In recent years, the notion of rank metric in the context of coding theory has seen many interesting developments in terms of applications such as space-time coding, network coding and public-key cryptography. These applications have raised the community's interest in theoretical properties of this type of code, such as the hardness of decoding in the rank metric or better decoding algorithms. Among the classical problems associated to codes for a given metric, the notion of code equivalence has always been of the greatest interest. In this talk, we discuss the hardness of the code equivalence problem in the rank metric for $\mathbb{F}_{q^m}$-linear and general rank metric codes.
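To fix ideas, here is a small self-contained sketch (our own illustration, not taken from the talk) of the rank metric itself: the distance between two matrices over $\mathbb{F}_p$ is the rank of their difference, computed below by Gaussian elimination mod $p$.

```python
# Rank distance over F_p (toy illustration): d_R(A, B) = rank(A - B).
def rank_mod_p(M, p):
    """Rank of a matrix over F_p via Gaussian elimination."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r][c] % p), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][c], -1, p)        # modular inverse (Python >= 3.8)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c] % p:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

p = 5
A = [[1, 2], [3, 4]]
B = [[1, 2], [0, 1]]
D = [[(a - b) % p for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
print(rank_mod_p(D, p))   # rank distance between A and B: 1
```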
Quantum computing is an increasingly hot topic, as it promises many advantages, whether in the complexity of its algorithms or in what it makes possible in cryptography. In this talk, we will first look at quantum circuits: the model usually used by researchers and engineers to describe quantum processes. We will then consider a fundamental question related to these circuits, that of the completeness of an equational theory. Finally, we will present the ZX-calculus, a model arising from category theory, which, for its part, answers this same question positively.
Given a partial differential equation (PDE), its solutions can be difficult, if not impossible, to describe. The purpose of the Fundamental theorem of tropical (partial) differential algebraic geometry is to extract from the equations certain properties of the solutions. More precisely, this theorem proves that the support of the solutions in $k[[t_1, \ldots, t_m]]$ (with $k$ a field of characteristic zero) can be obtained by solving a so-called tropicalized differential system.
We analyze complex multiplication for Jacobians of curves of genus 3, as well as the resulting Shimura class groups and their subgroups corresponding to Galois conjugation over the reflex field. We combine our results with numerical methods to find CM fields $K$ for which there exist both hyperelliptic and non-hyperelliptic curves whose Jacobian has complex multiplication by $\mathbb{Z}_K$.
Toric codes, introduced by Hansen in 2002, generalize (weighted) Reed-Muller codes to toric varieties other than projective spaces. They are evaluation codes of monomials at tuples of non-zero coordinates, which correspond to the points of the dense torus contained in the associated toric variety. Our aim is to 'projectivize' these codes, in the same spirit that turns a Reed-Muller code into a projective one: we consider codes obtained by evaluating global sections on the whole set of rational points of a toric variety. We focus on simplicial toric varieties, which come with a nice quotient description, and we give an explicit construction of projective codes on them, as well as a combinatorial way to determine their parameters. 'Projectivizing' toric codes opens new possibilities for getting codes with excellent parameters, by extending some champion classical toric codes geometrically.
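As a toy illustration of this evaluation construction (all parameters below are our own choices, not from the talk), one can build the generator matrix of a small toric code over $\mathbb{F}_7$ by evaluating the monomials of a lattice polytope at the points of the torus $(\mathbb{F}_7^*)^2$:

```python
# Toy toric code: evaluate the monomials x^i y^j, (i, j) in a polytope P,
# at all points of the dense torus (F_q^*)^2.  Parameters are invented.
q = 7  # prime field size, so arithmetic is plain mod-q

# Monomial exponents: lattice points of the square [0, 2] x [0, 2].
polytope = [(i, j) for i in range(3) for j in range(3)]

# Torus points: pairs of non-zero coordinates.
torus = [(x, y) for x in range(1, q) for y in range(1, q)]

# Generator matrix: one row per monomial, one column per torus point.
G = [[pow(x, i, q) * pow(y, j, q) % q for (x, y) in torus]
     for (i, j) in polytope]

print(f"toric code of length {len(torus)} and dimension <= {len(polytope)} over F_{q}")
```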
Sturm bounds indicate how many successive Fourier coefficients suffice to determine a modular form. For classical modular forms, they also provide bounds on the number of Hecke operators generating the algebra of the same name. This talk studies the situation for certain automorphic forms, called Drinfeld modular forms, over function fields. This is joint work with Fu-Tsun Wei (National Tsing-Hua University, Taiwan).
This talk describes joint work with Clément Pernet on an algorithm which computes the characteristic polynomial of a matrix over a field within the same asymptotic complexity, up to constant factors, as the multiplication of two square matrices. Previously, this was only achieved by resorting to genericity assumptions or randomization techniques, while the best known complexity bound with a general deterministic algorithm was obtained by Keller-Gehrig in 1985 and involves logarithmic factors. The new algorithm computes more generally the determinant of a univariate polynomial matrix in reduced form.
The effective determination of integral points on algebraic varieties is a difficult problem, especially in dimension greater than 1. In this talk, I will briefly present two natural approaches to integral points which, in favourable cases, allow one to find all of them. Seeking refinements of these methods leads to interesting combinatorial problems, which I will highlight in the specific case of a 'modular' threefold that can be defined by a quartic equation in $\mathbb{P}^4$.
Motivated by the richness of Gauss's algorithm, which efficiently computes the best rational approximations of a real number, many mathematicians have proposed generalizations of these algorithms to approximate vectors of dimension greater than 1. Examples include the algorithm introduced by Poincaré at the end of the 19th century, and those of Brun and Selmer in the middle of the 20th century.
From the early 1990s to today, a number of works have studied the convergence of these algorithms. Schweiger and Broise notably proved that the Selmer and Brun algorithms are convergent and ergodic. Perhaps more surprisingly, Nogueira proved that the algorithm proposed by Poincaré almost never converges.
Starting from the classical case of the Farey algorithm, an 'additive' version of Gauss's algorithm, I will present a combinatorial viewpoint on these algorithms that allows us to pass from a deterministic vision to a probabilistic approach. Indeed, in this model, choosing a random vector for the Lebesgue measure corresponds to following a random walk with memory in a labelled graph called a simplicial system. The laws of this random walk are elementary, and we can develop probabilistic techniques to study its generic dynamical behaviour. This will lead us to a purely graph-theoretic criterion for proving the convergence or non-convergence of a continued fraction algorithm.
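To make the subtractive flavour of these algorithms concrete, here is a minimal sketch (our own toy code, not from the talk) of a Brun-type step: subtract the second-largest coordinate from the largest one and record which coordinate was reduced, the labels playing the role of continued-fraction digits.

```python
# Brun-type subtractive algorithm on a positive vector: at each step,
# subtract the second-largest entry from the largest and record the
# index of the reduced coordinate.
def brun_orbit(v, steps=20):
    v = list(v)
    labels = []
    for _ in range(steps):
        order = sorted(range(len(v)), key=lambda i: v[i], reverse=True)
        i, j = order[0], order[1]   # largest and second-largest entries
        v[i] -= v[j]                # subtractive step
        labels.append(i)
    return labels, v

digits, remainder = brun_orbit([1.0, 0.5403, 0.1234])
print(digits)     # the 'digit' sequence of the expansion
```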
Let $E/F$ be an elliptic curve defined over a number field $F$, and let $K/F$ be a quadratic extension. If the analytic rank of $E(K)$ is one, one can often use Heegner points (or the more general Darmon points) to produce (at least conjecturally) a non-torsion generator of $E(K)$. If the analytic rank of $E(K)$ is larger than one, the problem of constructing algebraic points is still wide open. In very recent work, Michele Fornea and Lennart Gehrmann have introduced certain $p$-adic quantities that may be conjecturally related to the existence of these points. In this talk I will explain their construction, and illustrate it with some numerical experiments that we have been able to carry out in support of their conjecture. This is joint work with Michele Fornea and Xevi Guitart.
We study property (FA) and its hereditary version for unit groups of $2$-by-$2$ matrices over orders in totally definite quaternion algebras with rational centres. In particular we consider the three matrix rings over totally definite rational quaternion algebras that can appear as Wedderburn-Artin components of a group ring $\mathbb{Q}G$.
A key step is the construction of amalgamated decompositions of the elementary group $E_2(\mathcal{O})$, where $\mathcal{O}$ is an order in a rational division algebra, and of certain arithmetic groups $\Gamma$. The methods for the latter turn out to work in much greater generality, and most notably are carried out to obtain amalgam decompositions for the higher modular groups $SL_+(\Gamma_n(\mathbb{Z}))$, with $n \le 4$, which can be seen as higher-dimensional versions of the modular and Bianchi groups.
Let $K$ be a quadratic field which is not an imaginary quadratic field of class number one. We describe an algorithm to compute a superset of the set of primes $p$ for which there exists an elliptic curve over $K$ admitting a $K$-rational $p$-isogeny. Combining this algorithm with recent work on the determination of quadratic points on low-genus modular curves, we determine, conditionally on the Generalised Riemann Hypothesis, the above set of isogeny primes for several quadratic fields, providing the first such examples after Mazur's 1978 determination for $K = \mathbb{Q}$. We will give a live demo of the Sage and PARI/GP implementations of the algorithm.
In this talk, we will review the state of the art of algorithms for solving the Euclidean lattice problems underlying cryptography. In more detail, this talk contains two parts. In the first part, we will focus on lattice problems such as the approximate Shortest Vector Problem (approx-SVP) and on lattice reduction algorithms, which are the best known solving algorithms so far. In particular, I will present an improved enumeration-based lattice reduction algorithm, which is shown to be (potentially) relevant to cryptanalysis. In the second part, we will instead consider a quantum problem that is computationally equivalent to approx-SVP. By directly solving a quantum problem, we may expect to make more powerful use of quantum computation. However, the best known algorithms for solving approx-SVP via this quantum problem are not yet better than lattice reduction.
Pairings on elliptic curves are involved in signatures, NIZK, and recently in blockchains (ZK-SNARKs). These pairings take as input two points on an elliptic curve $E$ over a finite field, and output a value in an extension of that finite field. Usually, for efficiency reasons, this extension degree is of the form $2^i 3^j$ (such as 12, 18, 24), and moreover the characteristic of the finite field has a special form. The security relies on the hardness of computing discrete logarithms in the group of points of the curve and in the finite field extension.
In 2013-2016, new variants of the function field sieve and the number field sieve algorithms turned out to be faster in certain finite fields related to pairing-based cryptography, in particular those which had a very efficient arithmetic. Small-characteristic settings are now discarded. The situation for $GF(p^k)$ where $p$ is prime and $k$ is small is still quite unclear. We refine the work of Menezes-Sarkar-Singh and Barbulescu-Duquesne to estimate the cost of a hypothetical implementation of the Special-Tower-NFS in $GF(p^k)$ for small $k$, and deduce parameter sizes for cryptographic pairings.
Joint work with Shashank Singh, IISER Bhopal, India.
References
On the alpha value of polynomials in the tower number field sieve algorithm, Aurore Guillevic and Shashank Singh, Mathematical Cryptology, Vol. 1, No. 1 (Feb. 2021).
A short list of pairing-friendly curves at the 128-bit security level, Aurore Guillevic, presented at PKC 2020 (recorded talk), ePrint 2019/1371.
Implementation available under the MIT licence on gitlab: alpha in Magma, alpha and TNFS simulation in SageMath.
Integer factorization and discrete logarithm algorithms, at cryptographic sizes, involve two steps relevant to this talk: polynomial selection and cofactorization. The first consists in selecting two homogeneous polynomials $F(x,y)$ and $G(x,y)$ in $\mathbb{Z}[x,y]$ such that the set of integers $\{F(a,b)G(a,b) \mid a,b \in \text{a rectangle}, \gcd(a,b)=1\}$ contains as many $B$-smooth integers as possible (integers all of whose prime factors are at most $B$). The second consists in factoring integers of the form $F(a,b)$ and $G(a,b)$.
P. Montgomery (1986) sped up cofactorization by using elliptic curves corresponding to rational points on certain modular curves. Mazur's program can be stated as follows: given a number field $K$, bound the level of the modular curves that have $K$-rational points and effectively compute these points. Work of Rouse, Sutherland, Zureick-Brown and Zywina (2016, 2017) partially solved the case $K=\mathbb{Q}$.
Recent progress on Mazur's program B (2019-2021) gives partial answers to several questions: a) the rational points of modular curves having finitely many points; b) curves of composite level; c) $K$-rational points for quadratic fields $K$; d) torsion points over number fields.
We propose a minor modification of the polynomial selection step. The state of the art consists in constructing a large number of polynomials by Kleinjung's methods (2006) and keeping those that optimize Murphy's $\alpha$ function (2000). Empirically, this amounts to increasing the probability that $F(a,b)$ and $G(a,b)$ are $B$-smooth. We propose to also take into account the existence of families of elliptic curves that allow fast factorization of integers of the form $F(a,b)$. This amounts to describing the number fields $K$ of given degree, for example $2$, admitting modular curves with $K$-rational points.
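As a toy version of the smoothness statistics that drive polynomial selection (polynomials, bounds and ranges below are invented for illustration), one can count the $B$-smooth values of $F(a,b)G(a,b)$ over a small rectangle:

```python
# Count B-smooth values of F(a,b) * G(a,b) over a rectangle, gcd(a,b)=1.
from math import gcd

def is_smooth(n, B):
    """True iff |n| > 0 and all prime factors of n are <= B (trial division)."""
    n = abs(n)
    if n == 0:
        return False
    for d in range(2, B + 1):
        while n % d == 0:
            n //= d
    return n == 1

F = lambda a, b: a**3 + 2 * a * b**2 + b**3   # toy homogeneous cubic
G = lambda a, b: 5 * a - 7 * b                # toy linear polynomial
B = 100
pairs = [(a, b) for a in range(1, 50) for b in range(1, 50) if gcd(a, b) == 1]
hits = sum(1 for a, b in pairs if is_smooth(F(a, b) * G(a, b), B))
print(f"{hits} smooth pairs out of {len(pairs)}")
```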
This special session is dedicated to welcoming the interns of the LFANT team. After a general presentation of the team, we will present two software packages that we develop: PARI/GP and Arb.
We propose several constructions for the original multiplication algorithm of D.V. and G.V. Chudnovsky in order to improve its scalar complexity. We highlight the set of generic strategies that underlie the optimization of the scalar complexity, according to parameterizable criteria. As an example, we apply this analysis to the construction of elliptic Chudnovsky$^2$-type multiplication algorithms for small extensions. As a case study, we significantly improve the Baum-Shokrollahi construction for multiplication in $\mathbb{F}_{256}/\mathbb{F}_4$.
We will consider genus 2 curves whose Jacobian is geometrically the product of two elliptic curves with complex multiplication by the same (maximal) order. We will propose an algorithm to count how many of them have field of moduli $\mathbb{Q}$. To this end, we will develop an equivalence of categories between certain polarized abelian varieties and Hermitian lattices. This generalizes a 2018 article of A. Gélin, E. Howe and C. Ritzenthaler in which the Jacobian is the square of a single elliptic curve.
The explicit determination of perfect powers in (shifted) non-degenerate, integer, binary linear recurrence sequences has only been achieved in a handful of cases. In this talk, we combine bounds for linear forms in logarithms with results from the modularity of elliptic curves defined over totally real fields to explicitly determine all shifted powers by two in the Fibonacci sequence. A major obstacle that is overcome in this work is that the Hilbert newspace which we are interested in has dimension 6144. We will focus on how this space is computationally handled with respect to the underlying Diophantine equation. This is joint work with Mike Bennett (UBC) and Samir Siksek (Warwick).
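For a feel of the underlying Diophantine equation, presumably of the shape $F_n \pm 2 = y^p$, here is a naive search for small solutions (our own sketch; the talk's methods are of course far deeper):

```python
# Naive search for F_n +/- 2 being a perfect power y^p with p >= 2.
def perfect_power(n):
    """Return (y, p) with y**p == n, p >= 2, y >= 2, or None."""
    if n < 4:
        return None
    for p in range(2, n.bit_length() + 1):
        y = round(n ** (1.0 / p))
        for c in (y - 1, y, y + 1):
            if c >= 2 and c**p == n:
                return (c, p)
    return None

a, b = 0, 1                      # F_0, F_1
for n in range(2, 60):
    a, b = b, a + b              # after this line, b = F_n
    for s in (2, -2):
        hit = perfect_power(b + s)
        if hit:
            print(f"F_{n} {'+' if s > 0 else '-'} 2 = {hit[0]}^{hit[1]}")
```

On this range the search finds, for instance, $F_9 + 2 = 36 = 6^2$ and $F_9 - 2 = 32 = 2^5$.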
Rank-metric code-based cryptography relies on the hardness of decoding a random linear code in the rank metric. The Rank Support Learning problem (RSL) is a variant where an attacker has access to N decoding instances whose errors have the same support and wants to solve one of them. This problem is for instance used in the Durandal signature scheme. In this talk, we will present a new algebraic attack on RSL. We build upon Bardet et al., Asiacrypt 2020, where similar techniques are used to solve MinRank and RD. However, our analysis is simpler and overall our attack relies on very elementary assumptions compared to standard Gröbner bases attacks. In particular, our results show that key recovery attacks on Durandal are more efficient than was previously thought. This is joint work with Magali Bardet.
To an algebraic curve $C$ over the complex numbers one can associate a non-negative integer $g$, the genus, as a measure of its complexity. One can also associate to $C$, via complex analysis, a $g \times g$ symmetric matrix $\Omega$ called the period matrix. Because of the natural relation between $C$ and $\Omega$, one can obtain information about one by studying the other. Therefore, it makes sense to consider the inverse problem: given a period matrix $\Omega$, can we compute a model for the associated curve $C$?
In this talk, we will review how to construct the period matrix of a curve, see some known results around this problem, and discuss an application of its solution.
In this talk, I will study the general problem of when the value of the hypergeometric function $F(a,b;c;z)$ is algebraic, assuming $a$, $b$, $c$, and $z$ are rational. The results involve modular forms and functions, complex multiplication, Shimura curves, and computer searches.
Satoh's point-counting algorithm for elliptic curves achieves (after improvements by Harvey) quasi-quadratic complexity in the degree for a fixed (small) characteristic $p$. In this talk I will review several variants of this algorithm and its extensions to abelian varieties. I will then explain how the implementation of this algorithm can be greatly simplified. The Pari/GP implementation of the new algorithm yields a 30-fold gain in both computing time and memory consumption.
We show a method to sample an element $\alpha$ from a given ideal $I$, such that the quotient ideal $(\alpha)/I$ is a (possibly large) prime times a smooth number (a 'near-prime') with reasonable probability. This method consists of 'randomizing' the ideal $I$ by multiplying it with small primes (yielding $J$) and then sampling the element $\alpha$ from this randomized ideal $J$ intersected with a large box. The probability that the quotient $(\alpha)/J$ is prime (i.e., that the quotient $(\alpha)/I$ is a near-prime) is tightly related to density results on prime ideals (the prime ideal theorem). As an application we show an efficient way to compute power residue symbols for number fields of varying degree.
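As a toy analogue over $\mathbb{Z}$ (our own simplification: ideals of $\mathbb{Z}$ are $(n)$, so the quotient $(\alpha)/(n)$ is just $\alpha/n$), the strategy reads as follows; the success probability is governed by the density of primes in the box, mirroring the role of the prime ideal theorem.

```python
# Toy Z-analogue of the sampling strategy: randomize I = (n) by a small
# prime s (giving J = (n*s)), sample alpha = n*s*k from J with k in a
# large box, and test whether the quotient (alpha)/J = k is prime.
import random

def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k**0.5) + 1))

def sample_near_prime_quotient(n, box=10**6, trials=1000):
    small_primes = [2, 3, 5, 7]
    for _ in range(trials):
        s = random.choice(small_primes)   # 'randomize' the ideal
        k = random.randrange(2, box)      # alpha = n*s*k lies in J, in a box
        if is_prime(k):                   # quotient (alpha)/J is prime
            return n * s * k, s, k
    return None

alpha, s, k = sample_near_prime_quotient(101)
print(f"alpha = {alpha} = 101 * {s} * {k}, quotient {k} is prime")
```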
We will present the signature scheme SQISign (for Short Quaternion and Isogeny Signature), which exploits isogeny graphs of supersingular elliptic curves. The signature and public key sizes combined are an order of magnitude smaller than those of all other post-quantum signature schemes. Its efficient implementation and security analysis open new research challenges.
When $L/K$ is a Galois extension of number fields with Galois group $G$, some invariants of $L$ can be related to those of its proper subfields. I will present some old and some new such relations, and an application to the computation of class groups of some large number fields. This is joint work with Jean-François Biasse, Claus Fieker and Tommy Hofmann.
This work contributes to the field of lattice-based cryptography, a research domain of public-key cryptography that was initiated at the end of the 1990s by two different branches. On the one hand, there have been proposals benefiting from strong theoretical connections to presumed hard worst-case lattice problems, leading to the development of public-key cryptography based on the SIS (Short Integer Solution) and LWE (Learning With Errors) problems. On the other hand, very efficient schemes basing their security on average-case structured lattice problems have been introduced, the most popular of which is the NTRU encryption scheme.
Following the latter approach, Hoffstein and Silverman introduced in 2015 a public key encryption scheme called PASS Encrypt. It is very efficient and enjoys additive and multiplicative homomorphic properties. Unfortunately, the main problem with PASS Encrypt to date is that its security is not well understood: no proof of security has been given with respect to the hardness of explicit computational problems, and the scheme is deterministic, hence cannot satisfy the standard notion of IND-CPA security.
In the presented work, we make progress towards understanding the hardness assumptions needed to prove the security of PASS Encrypt. We study the Partial Vandermonde Knapsack problem (PV-Knap) and emphasize its connection to (average-case) ideal lattices. We enlarge the landscape of problems that use the partial Vandermonde matrix by defining a new variant of LWE, called Partial Vandermonde Learning With Errors (PV-LWE). We then show the equivalence of PV-Knap and PV-LWE by exploiting the same duality connection as for standard knapsack problems and LWE. In order to provide a security proof for PASS Encrypt, we need to define a variant of PV-Knap, which we call the PASS problem. This problem serves (together with the decision version of PV-Knap) as the underlying hardness assumption for (a slightly modified version of) PASS Encrypt. Furthermore, we present the scheme together with its security proof. We conclude the presentation with some interesting open questions regarding problems using the partial Vandermonde transform.
This is joint work with Amin Sakzad and Ron Steinfeld, currently under submission.
The Twisted-PHS algorithm for solving Approx-SVP on ideal lattices of any number field, based on the PHS algorithm by Pellet-Mary, Hanrot and Stehlé (2019), was introduced in 2020. Its authors performed experiments on cyclotomic fields of prime conductor and degree at most 70, reporting the exact approximation factors reached in practice. The main obstacle in these experiments is the computation of a log-$S$-unit lattice, which requires classical subexponential time.
In this work, we extend these experiments to 210 cyclotomic fields of any conductor $m$ and of degree up to 210. Building upon new results from Bernard and Kucera on the Stickelberger ideal, we construct a maximal set of independent $S$-units lifted from the maximal real subfield, using explicit Stickelberger generators obtained via Jacobi sums. Hence, we obtain full-rank log-$S$-unit sublattices fulfilling the role of approximating the full Twisted-PHS lattice. Notably, our obtained approximation factors match those of the Twisted-PHS team in small dimensions, where it is feasible to compute the original log-$S$-unit lattice.
As a side result, we use the knowledge of these explicit Stickelberger elements to remove almost all quantum steps in the CDW algorithm, by Cramer, Ducas and Wesolowski (2021), under the mild restriction that the plus part of the class number satisfies $h_{m}^{+}\leq O(\sqrt{m})$.
The full paper is available on ePrint:2021/1384. This is joint work with Andrea Lesavourey, Tuong-Huy Nguyen and Adeline Roux-Langlois.
The order type of a sequence of points in the plane is a generalization of the permutation associated with a sequence of real numbers. This combinatorial structure encodes many geometric properties of the point sequence, for example the face lattice of its convex hull, or the triangulations it supports.
This talk will begin with a quick introduction to these objects. I will then discuss a concentration phenomenon that appears when one reads off the order types of random point sequences, for various natural models. This concentration makes a good random exploration of these structures difficult.
This is joint work with Emo Welzl (article here and here).
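A minimal sketch (our own code, using invented sample points) of the order type as a combinatorial object: it records, for every triple of indices, the orientation of the corresponding point triple.

```python
# Order type of a planar point sequence: the map sending each index
# triple (i, j, k) to the orientation of (p_i, p_j, p_k), where
# +1 = counter-clockwise, -1 = clockwise, 0 = collinear.
from itertools import combinations

def orientation(p, q, r):
    det = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (det > 0) - (det < 0)

def order_type(points):
    return {(i, j, k): orientation(points[i], points[j], points[k])
            for i, j, k in combinations(range(len(points)), 3)}

pts = [(0, 0), (4, 1), (2, 3), (1, 1)]
print(order_type(pts))
```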
We present a new data structure to approximate accurately and efficiently a polynomial $f$ of degree $d$ given as a list of coefficients. Its properties allow us to improve the state-of-the-art bounds on the bit complexity for the problems of root isolation and approximate multipoint evaluation. This data structure also leads to a new geometric criterion to detect ill-conditioned polynomials, implying notably that the standard condition number of the zeros of a polynomial is at least exponential in the number of roots of modulus less than $\frac{1}{2}$ or greater than $2$.
Let $K$ be a number field and $A/K$ an abelian surface. By the Mordell-Weil theorem, the group of $K$-rational points on $A$ is finitely generated and as for elliptic curves, its rank is predicted by the Birch and Swinnerton-Dyer conjecture. A basic consequence of this conjecture is the parity conjecture: the sign of the functional equation of the $L$-series determines the parity of the rank of $A/K$.
Assuming finiteness of the Shafarevich-Tate group, we prove the parity conjecture for principally polarized abelian surfaces under suitable local constraints. Using a similar approach we show that for two elliptic curves $E_1$ and $E_2$ over $K$ with isomorphic $2$-torsion, the parity conjecture is true for $E_1$ if and only if it is true for $E_2$. In both cases, we prove analogous unconditional results for Selmer groups.
In this talk we will first review the classical criteria to determine the (stable) reduction type of elliptic curves (Tate) and of genus 2 curves (Liu) in terms of the valuations of some particular combinations of their invariants. We will also revisit the theory of cluster pictures for determining the reduction type of hyperelliptic curves (Dokchitser et al.). Via Mumford theta constants and Takase and Tomae's formulas, we will be able to read off the cluster picture information by looking at the valuations of some (à la Tsuyumine) invariants in the genus 3 case. We will also discuss the possible generalization of this strategy to any genus and some related open questions.
Reed-Solomon codes are widely used to represent data as vectors, in such a way that the data can be recovered even if some coordinates of the vectors are corrupted. These codes have many good properties: their parameters are optimal; they make it possible to reconstruct coordinates that have been erased; they are compatible with addition and multiplication of data. Nevertheless, they suffer from certain limitations, notably that the storage size of the coordinates of the vectors grows logarithmically with the number of coordinates. So-called geometric codes generalize Reed-Solomon codes, enjoying the same properties while being free of these limitations. Consequently, the use of geometric codes yields complexity gains and proves useful in several applications, such as distributed computation on secrets and zero-knowledge proofs. Geometric codes are constructed by evaluating families of functions, called Riemann-Roch spaces, at the rational points of a curve. It follows that the computation of these spaces is crucial for the implementation of geometric codes. In this talk, I will present recent joint work with S. Abelard, A. Couvreur and G. Lecerf on the effective computation of bases of Riemann-Roch spaces of curves. After reviewing the state of the art on the subject, I will discuss the ideas underlying our algorithm, in particular Brill-Noether theory and the use of Puiseux expansions. The curves used in the construction of geometric codes have mostly been limited to those for which Riemann-Roch bases are already known. This new work, and those to follow, will enable the construction of geometric codes from more general curves.
Oriented Supersingular Isogeny Diffie-Hellman (OSIDH) is a post-quantum key exchange proposed by Leonardo Colò and David Kohel in 2019. The construction relies on the action of the class group of an imaginary quadratic order on a space of supersingular elliptic curves, and can thus be seen as a generalization of the well-known isogeny-based key exchange CSIDH. However, OSIDH is very different from CSIDH from an algorithmic point of view, because OSIDH uses more structured class groups than CSIDH. As Colò and Kohel themselves acknowledged, this makes OSIDH more vulnerable to attacks. To work around this weakness, they proposed an ingenious way to carry out the key exchange by exchanging information about the class group action in the neighbourhood of the public curves, and conjectured that this additional information would not impact security.
In this talk, we will reassess the security of OSIDH by proposing a new attack, inspired by previous work of Onuki. Our attack is exponential but manages to break the parameters chosen by Colò and Kohel, unlike Onuki's attack. We will also see possible countermeasures to this attack, whose impact on the efficiency and functionality of OSIDH we will analyse.
Theta functions link the algebraic and analytic points of view in the study of abelian varieties: they are Siegel modular forms that provide coordinates on these varieties and their moduli spaces. Making this link effective requires an efficient algorithm to evaluate these theta functions at a point. In his thesis (2006), Dupont described a heuristic algorithm based on the arithmetic-geometric mean (AGM) and a Newton scheme to evaluate certain theta functions in genus 1 and 2 in time quasi-linear in the precision. The goal of this talk is to show that one can in fact obtain a certified algorithm with uniform complexity. I will also discuss the remaining obstacles to generalizing this result to higher dimension.
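The driving phenomenon is the quadratic convergence of the AGM: the number of correct digits roughly doubles at each step. A toy floating-point version (our own, far from the certified multiprecision setting of the talk):

```python
# Arithmetic-geometric mean: iterate (a, b) -> ((a+b)/2, sqrt(a*b)).
def agm(a, b, tol=1e-15):
    while abs(a - b) > tol * abs(a):
        a, b = (a + b) / 2, (a * b) ** 0.5
    return a

# Gauss: 1 / agm(1, sqrt(2)) is Gauss's constant, about 0.8346268.
print(agm(1.0, 2 ** 0.5))
```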
F. Oort showed that the moduli space of principally polarized supersingular abelian surfaces is a union of rational curves. This is proven by showing that every principally polarized supersingular abelian surface is the Jacobian of a fibre of one of the families of genus 2 curves $\pi\colon \mathcal{C} \rightarrow \mathbb{P}^1$ constructed by L. Moret-Bailly. We present an algorithm that makes this construction effective: given a point $x \in \mathbb{P}^1$, we compute a hyperelliptic model of the fibre $\pi^{-1}(x)$. The algorithm uses Mumford's theory of theta groups to compute quotients by the group scheme $\alpha_p$.
Satoh's algorithm for counting the number of points of an elliptic curve $E/\mathbb{F}_q$ with $q=p^n$ is the fastest known algorithm when $p$ is fixed: it computes the invertible eigenvalue $\lambda$ of the Frobenius to $p$-adic precision $m$ in time $\tilde{O}(p^2 n m)$. Since, by Hasse's bound, recovering $\chi_{\pi}$ requires working at precision $m=O(n)$, the point counting complexity is $\tilde{O}(p^2 n^2)$, quasi-quadratic in the degree $n$.
Unfortunately, the term $p^2$ in the complexity makes Satoh's algorithm suitable only for small $p$. For medium-sized $p$, one can use Kedlaya's algorithm, which costs $\tilde{O}(p n^2 m)$, or a variant by Harvey which costs $\tilde{O}(p^{1/2} n^{5/2} m + n^4 m)$; these have a better complexity in $p$ but a worse one in $n$. For large $p$, the SEA algorithm costs $\tilde{O}(\log^4 q)$.
In this talk, we improve the dependency on $p$ of Satoh's algorithm while retaining the dependency on $n$, to bridge the gap towards medium characteristic. We develop a new algorithm with a complexity of $\tilde{O}(p n m)$. In the particular case where we are furthermore provided with a rational point of $p$-torsion, we even improve this complexity to $\tilde{O}(p^{1/2} n m)$.
This is a joint work with Abdoulaye Maiga.
Solving polynomials is a fundamental computational problem in mathematics. In the real setting, we can use Descartes' rule of signs to efficiently isolate the real roots of a square-free real polynomial. In this talk, we show how to translate this method into the $p$-adic world. We show how the $p$-adic analogue of Descartes' rule of signs, Strassman's theorem, leads to an algorithm to isolate the $p$-adic roots of a square-free $p$-adic polynomial, and provide some complexity estimates, adapting the condition-based complexity framework from real/complex numerical algebraic geometry to the $p$-adic case.
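For reference, Descartes' rule of signs bounds the number of positive real roots by the number of sign changes in the coefficient sequence (the count is attained up to an even non-negative integer); a minimal sketch with an invented example polynomial:

```python
# Descartes' rule of signs: count sign changes in the coefficients.
def sign_changes(coeffs):
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# f(x) = x^3 - 4x^2 + x + 6 = (x + 1)(x - 2)(x - 3),
# coefficients listed from degree 0 up to degree 3.
print(sign_changes([6, 1, -4, 1]))   # 2, matching the positive roots 2 and 3
```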
In this talk we will describe an explicit approach to computing inertial automorphic types for $\mathrm{GL}_2$. We will then give some applications of this algorithm to Diophantine problems and to problems of an arithmetic nature.
In a recent paper, Don Zagier mentions a mysterious integer sequence $(a_n)_{n \geq 0}$ which arises from a solution of a topological ODE discovered by Marco Bertola, Boris Dubrovin and Di Yang. In my talk I show how to conjecture, prove and even quantify that $(a_n)_{n \geq 0}$ actually admits an algebraic generating function, which is therefore a very particular period. The methods are based on experimental mathematics and algorithmic ideas in differential Galois theory, which I will show in the interactive part of the talk. The presentation is based on joint work with A. Bostan and J.-A. Weil.
This is joint work with P. Cho, Y. Lee and A. Södergren. Since the work of Davenport-Heilbronn, many papers have been written giving ever more precise estimates for the count of cubic fields of discriminant at most $X$. Let us mention for example the works of Belabas, Belabas-Bhargava-Pomerance, Bhargava-Shankar-Tsimerman, Taniguchi-Thorne and Bhargava-Taniguchi-Thorne. In this talk I will present a negative result, which shows that the Riemann hypothesis implies a limitation on the smallest possible size of the error term in these estimates. We approach the question via the theory of low-lying zeros of $L$-functions, in particular the Katz-Sarnak philosophy and the subsequent papers on the family of Dedekind zeta functions of cubic fields. I will also present numerical results obtained with pari/gp and Belabas's program 'cubic', which indicate that our result could be optimal.
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds.
This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand, the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance.
In this work, we provide generic realisations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the Lattice Isomorphism Problem (LIP). More specifically, we provide:
- a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014).
- a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism. This implies an identification scheme based on search-LIP.
- a key encapsulation mechanism (KEM) scheme and a hash-then-sign signature scheme, both based on distinguish-LIP.
The purpose of this approach is to let remarkable lattices improve the security and performance of lattice-based cryptography. For example, decoding within a poly-logarithmic factor of Minkowski's bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to a poly-logarithmic approximation factor, provided that the dual lattice is also close to Minkowski's bound. Recent works have indeed reached such decoders for certain lattices (Chor-Rivest, Barnes-Sloane), but these do not perfectly fit our needs as their duals have poor minimal distance.
Computing the class group and the unit group of a number field is a famous problem of algorithmic number theory. Recently, it has also become an important problem in cryptography, since it is used in multiple algorithms related to algebraic lattices.
Subexponential time algorithms are known to solve this problem in any number field, but they heavily rely on heuristics. The only non-heuristic known algorithm (still assuming ERH), due to Hafner and McCurley, is restricted to imaginary quadratic number fields.
In this talk, we will see a rigorous subexponential time algorithm computing units and class group (and more generally S-units) in any number field, assuming the extended Riemann hypothesis.
This is a joint work with Koen de Boer and Benjamin Wesolowski.