In recent years, the notion of rank metric in the context of coding theory has seen many interesting developments in terms of applications, such as space-time coding, network coding or public-key cryptography. These applications have raised the interest of the community in the theoretical properties of this type of code, such as the hardness of decoding in the rank metric or better decoding algorithms. Among the classical problems associated with codes for a given metric, the notion of code equivalence has always been of the greatest interest. In this talk, we discuss the hardness of the code equivalence problem in the rank metric for $\mathbb{F}_{q^m}$-linear and general rank metric codes.
Quantum computing is an increasingly hot topic, as it promises many advantages, whether in the complexity of its algorithms or in what it enables in cryptography. In this talk, we will first look at quantum circuits: the model usually used by researchers and engineers to describe quantum processes. We will then consider a fundamental question about these circuits, that of the completeness of an equational theory. Finally, we will present the ZX-calculus, a model stemming from category theory, which answers this same question positively.
Given a partial differential equation (PDE), its solutions can be difficult, if not impossible, to describe. The purpose of the Fundamental theorem of tropical (partial) differential algebraic geometry is to extract from the equations certain properties of the solutions. More precisely, this theorem proves that the support of the solutions in $k[[t_1, \dots, t_m]]$ (with $k$ a field of characteristic zero) can be obtained by solving a so-called tropicalized differential system.
We analyze complex multiplication for Jacobians of curves of genus 3, as well as the resulting Shimura class groups and their subgroups corresponding to Galois conjugation over the reflex field. We combine our results with numerical methods to find CM fields $K$ for which there exist both hyperelliptic and non-hyperelliptic curves whose Jacobian has complex multiplication by $\mathbb{Z}_K$.
Toric codes, introduced by Hansen in 2002, generalize (weighted) Reed-Muller codes to toric varieties other than projective spaces. They are evaluation codes of monomials at tuples of non-zero coordinates, which correspond to the points on the dense torus contained in the associated toric variety. Our aim is to ‘projectivize’ these codes, in the same spirit that turns a Reed-Muller code into a projective one: we consider codes obtained by evaluating global sections on the whole set of rational points of a toric variety. We focus on simplicial toric varieties, which come with a nice quotient description, and we give an explicit construction of projective codes on them, as well as a combinatorial way to determine their parameters. ‘Projectivizing’ toric codes opens new possibilities for obtaining codes with excellent parameters, by extending some champion classical toric codes geometrically.
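To make the evaluation construction concrete, here is a minimal sketch (a toy affine toric code of our own devising, not the projective construction of the talk): monomials attached to lattice points of a polygon are evaluated on the torus $(\mathbb{F}_q^*)^2$ for a prime $q$, and the dimension of the resulting code is obtained by Gaussian elimination.

```python
from itertools import product

def toric_code(q, monomials):
    """Generator matrix of a toy toric code over F_q (q prime): each row
    evaluates a monomial x^a * y^b at all points of the torus (F_q^*)^2."""
    torus = list(product(range(1, q), repeat=2))
    return [[pow(x, a, q) * pow(y, b, q) % q for (x, y) in torus]
            for (a, b) in monomials]

def rank_mod(M, q):
    """Rank over F_q by Gaussian elimination: the dimension of the code."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] % q), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, q)
        M[r] = [v * inv % q for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(vi - M[i][c] * vr) % q for vi, vr in zip(M[i], M[r])]
        r += 1
    return r

# monomials 1, x, y from the polygon conv{(0,0),(1,0),(0,1)}, over F_7:
G = toric_code(7, [(0, 0), (1, 0), (0, 1)])
print(len(G[0]), rank_mod(G, 7))   # length 36, dimension 3
```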
Sturm bounds indicate how many successive Fourier coefficients suffice to determine a modular form. For classical modular forms, they also yield bounds on the number of Hecke operators generating the Hecke algebra. This talk studies the situation for certain automorphic forms over function fields, called Drinfeld modular forms. This is joint work with Fu-Tsun Wei (National Tsing-Hua University, Taiwan).
This talk describes joint work with Clément Pernet on an algorithm which computes the characteristic polynomial of a matrix over a field within the same asymptotic complexity, up to constant factors, as the multiplication of two square matrices. Previously, this was only achieved by resorting to genericity assumptions or randomization techniques, while the best known complexity bound with a general deterministic algorithm was obtained by Keller-Gehrig in 1985 and involves logarithmic factors. The new algorithm computes more generally the determinant of a univariate polynomial matrix in reduced form.
The effective determination of integral points on algebraic varieties is a difficult problem, especially in dimension greater than 1. In this talk, I will briefly present two natural approaches to integral points which, in favourable cases, allow one to find all of them. Seeking refinements of these methods leads to interesting combinatorial problems, which I will highlight in the specific case of a “modular” threefold that can be defined by a quartic equation in $\mathbb{P}^4$.
Motivated by the richness of Gauss’s algorithm, which efficiently computes the best rational approximations of a real number, many mathematicians have proposed generalizations of such algorithms to approximate vectors of dimension greater than 1. Examples include the algorithm introduced by Poincaré at the end of the 19th century and those of Brun and Selmer in the middle of the 20th century.
From the early 1990s to today, a number of works have studied the convergence of these algorithms. Schweiger and Broise notably proved that the Selmer and Brun algorithms are convergent and ergodic. Perhaps more surprisingly, Nogueira proved that the algorithm proposed by Poincaré almost never converges.
Starting from the classical case of the Farey algorithm, an “additive” version of Gauss’s algorithm, I will present a combinatorial viewpoint on these algorithms that allows one to pass from a deterministic vision to a probabilistic approach. Indeed, in this model, taking a random vector for the Lebesgue measure corresponds to following a random walk with memory in a labelled graph called a simplicial system. The laws of this random walk are elementary, and we can develop probabilistic techniques to study its generic dynamical behaviour. This leads to a purely graph-theoretic criterion to prove the convergence or non-convergence of a continued fraction algorithm.
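As a one-dimensional illustration of the additive-versus-multiplicative dichotomy (a minimal sketch with our own function names): the Gauss map produces one partial quotient per step, while the Farey map takes unit steps, the Gauss map being its acceleration.

```python
from fractions import Fraction

def gauss_digits(x, n):
    """Partial quotients of x in (0,1) under the Gauss map x -> 1/x - floor(1/x)."""
    digits = []
    for _ in range(n):
        if x == 0:
            break
        a = int(1 / x)            # multiplicative step: one partial quotient
        digits.append(a)
        x = 1 / x - a
    return digits

def farey_branches(x, n):
    """Additive Farey map: x -> x/(1-x) on [0,1/2], x -> (1-x)/x on [1/2,1].
    A partial quotient of the Gauss map corresponds to a run of Farey steps."""
    branches = []
    for _ in range(n):
        if x == 0:
            break
        if 2 * x <= 1:
            branches.append('L'); x = x / (1 - x)
        else:
            branches.append('R'); x = (1 - x) / x
    return branches

x = Fraction(103, 233)
print(gauss_digits(x, 8), ''.join(farey_branches(x, 12)))
```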
Let $E/F$ be an elliptic curve defined over a number field $F$, and let $K/F$ be a quadratic extension. If the analytic rank of $E(K)$ is one, one can often use Heegner points (or the more general Darmon points) to produce (at least conjecturally) a nontorsion generator of $E(K)$. If the analytic rank of $E(K)$ is larger than one, the problem of constructing algebraic points is still very open. In very recent work, Michele Fornea and Lennart Gehrmann have introduced certain $p$-adic quantities that may be conjecturally related to the existence of these points. In this talk I will explain their construction, and illustrate with some numerical experiments that we have been able to carry out that support their conjecture. This is joint work with Michele Fornea and Xevi Guitart.
We study property (FA) and its hereditary version for unit groups of $2$-by-$2$ matrices over orders in totally definite quaternion algebras with rational centres. In particular we consider the three matrix rings over totally definite rational quaternion algebras that can appear as Wedderburn-Artin components of a group ring $\mathbb{Q}G$.
A key step is the construction of amalgamated decompositions of the elementary group $E_2(\mathcal{O})$, where $\mathcal{O}$ is an order in a rational division algebra, and of certain arithmetic groups $\Gamma$. The methods for the latter turn out to work in much greater generality and most notably are carried out to obtain amalgam decompositions for the higher modular groups $SL_+(\Gamma_n(\mathbb{Z}))$, with $n \le 4$, which can be seen as higher-dimensional versions of the modular and Bianchi groups.
Let $K$ be a quadratic field which is not an imaginary quadratic field of class number one. We describe an algorithm to compute a superset of the set of primes $p$ for which there exists an elliptic curve over $K$ admitting a $K$-rational $p$-isogeny. Combining this algorithm with recent work on the determination of quadratic points on low-genus modular curves, we determine, conditionally on the Generalised Riemann Hypothesis, the above set of isogeny primes for several quadratic fields, providing the first such examples after Mazur's 1978 determination for $K = \mathbb{Q}$. We will give a live demo of the Sage and PARI/GP implementations of the algorithm.
In this talk, we review the state of the art of algorithms for solving the Euclidean lattice problems underlying cryptography. In more detail, this talk contains two parts. In the first part, we focus on lattice problems such as the approximate Shortest Vector Problem (approx-SVP) and on lattice reduction algorithms, the best known solving algorithms so far. In particular, I will present an improved enumeration-based lattice reduction algorithm, which is shown to be (potentially) relevant to cryptanalysis. In the second part, we instead consider a quantum problem that is computationally equivalent to approx-SVP. By directly solving a quantum problem, we may expect to make more powerful use of quantum computation. However, the best known algorithms for solving approx-SVP via this quantum problem are not yet better than lattice reduction.
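As a toy illustration of what a lattice reduction algorithm does, here is the two-dimensional case (Lagrange-Gauss reduction; the talk concerns its high-dimensional generalizations such as BKZ and enumeration, which are much subtler):

```python
def lagrange_gauss(u, v):
    """Reduce a basis (u, v) of a 2D integer lattice: on exit |u| <= |v| and
    |<u, v>| <= |u|^2 / 2, so u is a shortest nonzero vector of the lattice."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # size-reduce v against u (one nearest-integer / Babai step);
        # for huge entries use exact rationals instead of float division
        m = round(dot(u, v) / dot(u, u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(u, u) <= dot(v, v):
            return u, v
        u, v = v, u

print(lagrange_gauss((66, 35), (41, 22)))   # a short basis of the same lattice
```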
Pairings on elliptic curves are involved in signatures, NIZK proofs, and recently in blockchains (zk-SNARKs). These pairings take as input two points on an elliptic curve $E$ over a finite field, and output a value in an extension of that finite field. Usually, for efficiency reasons, this extension degree is a product of powers of 2 and 3 (such as 12, 18, 24), and moreover the characteristic of the finite field has a special form. The security relies on the hardness of computing discrete logarithms in the group of points of the curve and in the finite field extension.
In 2013-2016, new variants of the function field sieve and the number field sieve turned out to be faster in certain finite fields related to pairing-based cryptography, in particular those with very efficient arithmetic. Small-characteristic settings are now discarded. The situation for $GF(p^k)$ where $p$ is prime and $k$ is small is still quite unclear. We refine the work of Menezes-Sarkar-Singh and Barbulescu-Duquesne to estimate the cost of a hypothetical implementation of the Special Tower NFS in $GF(p^k)$ for small $k$, and deduce parameter sizes for cryptographic pairings.
Joint work with Shashank Singh, IISER Bhopal, India.
References
Aurore Guillevic and Shashank Singh, On the alpha value of polynomials in the tower number field sieve algorithm, Mathematical Cryptology, Vol. 1, No. 1 (Feb 2021).
Aurore Guillevic, A short list of pairing-friendly curves at the 128-bit security level, presented at PKC 2020, ePrint 2019/1371.
Implementation available under the MIT licence on GitLab: alpha in Magma, alpha and TNFS simulation in SageMath.
Integer factorization and discrete logarithm algorithms, in their cryptographic-size variants, have two steps relevant to this talk: polynomial selection and cofactorization. The first consists in selecting two homogeneous polynomials $F(x,y)$ and $G(x,y)$ in $\mathbb{Z}[x,y]$ such that the set of integers $\{F(a,b)G(a,b) \mid a,b \in \text{a rectangle},\ \gcd(a,b)=1\}$ contains as many $B$-smooth integers as possible (integers whose prime factors are all less than $B$). The second consists in factoring integers of the form $F(a,b)$ and $G(a,b)$.
P. Montgomery (1986) sped up cofactorization by using elliptic curves corresponding to rational points on certain modular curves. Mazur's program can be stated as follows: given a number field $K$, bound the level of the modular curves that have $K$-rational points and compute these points effectively. Work of Rouse, Sutherland, Zureick-Brown and Zywina (2016, 2017) partially solved the case $K=\mathbb{Q}$.
Recent progress on Mazur's Program B (2019-2021) partially answers several questions: a) the rational points of modular curves having finitely many points; b) curves of composite level; c) $K$-rational points for quadratic fields $K$; d) torsion points over number fields.
We propose a minor modification of the polynomial selection step. The state of the art consists in constructing a large number of polynomials by Kleinjung's methods (2006) and keeping those that optimize Murphy's $\alpha$ function (2000). Empirically, this amounts to increasing the probability that $F(a,b)$ and $G(a,b)$ are $B$-smooth. We propose to take into account the existence of families of elliptic curves that allow fast factorization of integers of the form $F(a,b)$. This amounts to describing the number fields $K$ of given degree, for example $2$, admitting modular curves with $K$-rational points.
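To make the smoothness criterion concrete, here is a minimal sketch (toy polynomials and bounds of our own choosing, far from NFS sizes) estimating the proportion of $B$-smooth products $F(a,b)G(a,b)$ over a rectangle:

```python
from math import gcd

def is_smooth(n, B):
    """Trial-division test that |n| has all prime factors <= B (toy sizes only).
    n = 0 (e.g. where G vanishes on the rectangle) is reported as not smooth."""
    n = abs(n)
    p = 2
    while p <= B and n > 1:
        while n % p == 0:
            n //= p
        p += 1 if p == 2 else 2   # 2, 3, 5, 7, 9, ...; composite trial divisors are harmless
    return n == 1

def smooth_proportion(F, G, A, B):
    """Proportion of coprime pairs (a, b) in [1, A]^2 with F(a,b)*G(a,b) B-smooth."""
    hits = total = 0
    for a in range(1, A + 1):
        for b in range(1, A + 1):
            if gcd(a, b) == 1:
                total += 1
                hits += is_smooth(F(a, b) * G(a, b), B)
    return hits / total

F = lambda a, b: a**3 + 2 * b**3   # hypothetical "algebraic side"
G = lambda a, b: a - 5 * b         # hypothetical "rational side"
print(smooth_proportion(F, G, 40, 50))
```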
This special session is dedicated to welcoming the interns of the LFANT team. After a general presentation of the team, we will present two software packages that we develop: PARI/GP and Arb.
We propose several constructions for the original multiplication algorithm of D.V. and G.V. Chudnovsky in order to improve its scalar complexity. We highlight the set of generic strategies that underlie the optimization of the scalar complexity, according to parameterizable criteria. As an example, we apply this analysis to the construction of elliptic Chudnovsky²-type multiplication algorithms for small extensions. As a case study, we significantly improve the Baum-Shokrollahi construction for multiplication in $\mathbb{F}_{256}/\mathbb{F}_4$.
We consider genus 2 curves whose Jacobian is geometrically the product of two elliptic curves with complex multiplication by the same (maximal) order. We propose an algorithm to count how many of them have field of moduli $\mathbb{Q}$. To this end, we develop an equivalence of categories between certain polarized abelian varieties and Hermitian lattices. This generalizes a 2018 article by A. Gélin, E. Howe and C. Ritzenthaler in which the Jacobian is the square of a single elliptic curve.
The explicit determination of perfect powers in (shifted) non-degenerate, integer, binary linear recurrence sequences has only been achieved in a handful of cases. In this talk, we combine bounds for linear forms in logarithms with results from the modularity of elliptic curves defined over totally real fields to explicitly determine all shifted powers by two in the Fibonacci sequence. A major obstacle that is overcome in this work is that the Hilbert newspace which we are interested in has dimension 6144. We will focus on how this space is computationally handled with respect to the underlying Diophantine equation. This is joint work with Mike Bennett (UBC) and Samir Siksek (Warwick).
Rank-metric code-based cryptography relies on the hardness of decoding a random linear code in the rank metric. The Rank Support Learning problem (RSL) is a variant where an attacker has access to N decoding instances whose errors have the same support and wants to solve one of them. This problem is for instance used in the Durandal signature scheme. In this talk, we will present a new algebraic attack on RSL. We build upon Bardet et al., Asiacrypt 2020, where similar techniques are used to solve MinRank and RD. However, our analysis is simpler and overall our attack relies on very elementary assumptions compared to standard Gröbner bases attacks. In particular, our results show that key recovery attacks on Durandal are more efficient than was previously thought. This is joint work with Magali Bardet.
To an algebraic curve $C$ over the complex numbers one can associate a non-negative integer $g$, the genus, as a measure of its complexity. One can also associate to $C$, via complex analysis, a $g \times g$ symmetric matrix $\Omega$ called the period matrix. Because of the natural relation between $C$ and $\Omega$, one can obtain information about one by studying the other. Therefore, it makes sense to consider the inverse problem: given a period matrix $\Omega$, can we compute a model for the associated curve $C$?
In this talk, we will review how to construct the period matrix of a curve, see some known results around this problem, and discuss an application of its solution.
In this talk, I will study the general problem of when the value of the hypergeometric function $F(a,b;c;z)$ is algebraic, assuming $a$,$b$,$c$, and $z$ rational. The results involve modular forms and functions, complex multiplication, Shimura curves, and computer searches.
Satoh's point-counting algorithm for elliptic curves achieves (after improvements by Harvey) quasi-quadratic complexity in the degree for a fixed (small) characteristic $p$. In this talk I will review several variants of this algorithm and its extensions to abelian varieties. I will then explain how the implementation of this algorithm can be greatly simplified. The Pari/GP implementation of the new algorithm yields a factor-30 gain in both computation time and memory consumption.
We show a method to sample an element $\alpha$ from a given ideal $I$, such that the quotient ideal $(\alpha)/I$ is a (possibly large) prime times a smooth number (a ‘near-prime’) with reasonable probability. This method consists in ‘randomizing’ the ideal $I$ by multiplying it with small primes (yielding $J$) and then sampling the element $\alpha$ from this randomized ideal $J$ intersected with a large box. The probability that the quotient $(\alpha)/J$ is prime (i.e., that the quotient $(\alpha)/I$ is a near-prime) is tightly related to density results on prime ideals (the prime ideal theorem). As an application we show an efficient way to compute power residue symbols for number fields of varying degree.
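A toy analogue over $\mathbb{Z}$, where the ideal $I$ becomes $n\mathbb{Z}$ (all names and parameters here are ours, and the density analysis of the actual method is ignored):

```python
import random
from math import isqrt

def is_prime(n):
    """Trial division, enough for toy sizes."""
    if n < 2:
        return False
    return all(n % p for p in range(2, isqrt(n) + 1))

def sample_near_prime_quotient(n, box=10**6, smooth_primes=(2, 3, 5, 7), tries=10**4):
    """I = n*Z.  Randomize I by a smooth cofactor s (J = s*n*Z), sample alpha
    in J inside a box, and keep alpha when alpha/(s*n) is prime: then alpha/n
    is a near-prime, a prime times a smooth number."""
    for _ in range(tries):
        s = 1
        for p in smooth_primes:
            s *= p ** random.randint(0, 3)
        m = s * n                    # generator of the randomized "ideal" J
        k = random.randint(1, box // m) if box >= m else 0
        if k and is_prime(k):        # quotient (alpha)/J = k is prime
            return k * m, s          # alpha, and the smooth part of (alpha)/I
    return None

print(sample_near_prime_quotient(91))
```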
We will present the signature scheme SQISign (for Short Quaternion and Isogeny Signature), which exploits isogeny graphs of supersingular elliptic curves. The signature and public key sizes combined are an order of magnitude smaller than those of all other post-quantum signature schemes. Its efficient implementation and security analysis open new research challenges.
When $L/K$ is a Galois extension of number fields with Galois group $G$, some invariants of $L$ can be related to those of its proper subfields. I will present some old and some new such relations, and an application to the computation of class groups of some large number fields. This is joint work with Jean-François Biasse, Claus Fieker and Tommy Hofmann.
This work contributes to the field of lattice-based cryptography, a research domain of public-key cryptography that was initiated at the end of the 1990s by two different branches. On the one hand, there have been proposals benefiting from strong theoretical connections to presumed hard worst-case lattice problems, leading to the development of public-key cryptography based on the SIS (Short Integer Solution) and LWE (Learning With Errors) problems. On the other hand, very efficient schemes basing their security on average-case structured lattice problems have been introduced, the most popular among them being the NTRU encryption scheme.
Following the latter approach, Hoffstein and Silverman introduced in 2015 a public-key encryption scheme called PASS Encrypt. It is very efficient and enjoys additive and multiplicative homomorphic properties. Unfortunately, the main problem with PASS Encrypt to date is that its security is not well understood: no proof of security was given with respect to the hardness of explicit computational problems, and the scheme is deterministic, hence it cannot satisfy the standard notion of IND-CPA security.
In the presented work, we make progress towards understanding the hardness assumptions needed to prove the security of PASS Encrypt. We study the Partial Vandermonde Knapsack problem (PV-Knap) and emphasize its connection to (average-case) ideal lattices. We enlarge the landscape of problems that use the partial Vandermonde matrix by defining a new variant of LWE, called Partial Vandermonde Learning With Errors (PV-LWE). We then show the equivalence of PV-Knap and PV-LWE by exploiting the same duality connection as for standard Knapsack problems and LWE. In order to provide a security proof for PASS Encrypt, we need to define a variant of PV-Knap, which we call the PASS problem. This problem serves (together with the decision version of PV-Knap) as the underlying hardness assumption for (a slightly modified version of) PASS Encrypt. Furthermore, we present the scheme together with its security proof. We conclude the presentation with some interesting open questions regarding problems using the partial Vandermonde transform.
This is joint work with Amin Sakzad and Ron Steinfeld, currently under submission.
The Twisted-PHS algorithm, introduced in 2020 and building on the 2019 PHS algorithm of Pellet-Mary, Hanrot and Stehlé, solves approx-SVP for ideal lattices in any number field. Its authors performed experiments for cyclotomic fields of prime conductor and degree at most 70, reporting the exact approximation factors reached in practice. The main obstacle to these experiments is the computation of a log-S-unit lattice, which requires classical subexponential time.
In this work, we extend these experiments to 210 cyclotomic fields of any conductor $m$ and of degree up to 210. Building upon new results from Bernard and Kucera on the Stickelberger ideal, we construct a maximal set of independent S-units lifted from the maximal real subfield using explicit Stickelberger generators obtained via Jacobi sums. Hence, we obtain full-rank log-S-unit sublattices fulfilling the role of approximating the full Twisted-PHS lattice. Notably, our approximation factors match those obtained by the Twisted-PHS team in small dimensions, where computing the original log-S-unit lattice is feasible.
As a side result, we use the knowledge of these explicit Stickelberger elements to remove almost all quantum steps in the CDW algorithm of Cramer, Ducas and Wesolowski (2021), under the mild restriction that the plus part of the class number satisfies $h_{m}^{+} \leq O(\sqrt{m})$.
The full paper is available as ePrint 2021/1384. This is joint work with Andrea Lesavourey, Tuong-Huy Nguyen and Adeline Roux-Langlois.
The order type of a sequence of points in the plane is a generalization of the permutation associated with a sequence of real numbers. This combinatorial structure encodes many geometric properties of the point sequence, for example the face lattice of its convex hull, or the triangulations it supports.
This talk will begin with a quick introduction to these objects. I will then discuss a concentration phenomenon that appears when one looks at the order types of random point sequences, for various natural models. This concentration makes a good random exploration of these structures difficult.
This is joint work with Emo Welzl.
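As a minimal illustration of the object under discussion (the standard encoding of an order type by orientations of index triples; this is not the machinery of the talk):

```python
from itertools import combinations
import random

def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p):
    +1 counter-clockwise, -1 clockwise, 0 collinear."""
    d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (d > 0) - (d < 0)

def order_type(points):
    """Order type of a planar point sequence: the orientation of every
    increasing triple of indices."""
    return {(i, j, k): orient(points[i], points[j], points[k])
            for i, j, k in combinations(range(len(points)), 3)}

pts = [(random.random(), random.random()) for _ in range(5)]
print(order_type(pts))
```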
We present a new data structure to approximate accurately and efficiently a polynomial $f$ of degree $d$ given as a list of coefficients. Its properties allow us to improve the state-of-the-art bounds on the bit complexity for the problems of root isolation and approximate multipoint evaluation. This data structure also leads to a new geometric criterion to detect ill-conditioned polynomials, implying notably that the standard condition number of the zeros of a polynomial is at least exponential in the number of roots of modulus less than $\frac{1}{2}$ or greater than $2$.
Let $K$ be a number field and $A/K$ an abelian surface. By the Mordell-Weil theorem, the group of $K$-rational points on $A$ is finitely generated and as for elliptic curves, its rank is predicted by the Birch and Swinnerton-Dyer conjecture. A basic consequence of this conjecture is the parity conjecture: the sign of the functional equation of the $L$-series determines the parity of the rank of $A/K$.
Assuming finiteness of the Shafarevich-Tate group, we prove the parity conjecture for principally polarized abelian surfaces under suitable local constraints. Using a similar approach we show that for two elliptic curves $E_1$ and $E_2$ over $K$ with isomorphic $2$-torsion, the parity conjecture is true for $E_1$ if and only if it is true for $E_2$. In both cases, we prove analogous unconditional results for Selmer groups.
In this talk we will first review the classical criteria determining the (stable) reduction type of elliptic curves (Tate) and of genus 2 curves (Liu) in terms of the valuations of some particular combinations of their invariants. We will also revisit the theory of cluster pictures for determining the reduction type of hyperelliptic curves (Dokchitser et al.). Via Mumford theta constants and the formulas of Takase and Tomae, we will be able to read off the cluster picture information from the valuations of some (à la Tsuyumine) invariants in the genus 3 case. We will also discuss the possible generalization of this strategy to any genus, and some related open questions.
Reed-Solomon codes are widely used to represent data as vectors, so that the data can be recovered even if some coordinates of the vectors are corrupted. These codes enjoy many good properties: their parameters are optimal, they allow the reconstruction of erased coordinates, and they are compatible with addition and multiplication of data. Nevertheless, they suffer from certain limitations; notably, the storage size of the coordinates grows logarithmically with the number of coordinates. So-called geometric codes generalize Reed-Solomon codes while enjoying the same properties, yet are free of these limitations. Consequently, the use of geometric codes brings complexity gains and proves useful in several applications, such as distributed computation on secrets and zero-knowledge proofs. Geometric codes are constructed by evaluating families of functions, called Riemann-Roch spaces, at the rational points of a curve. It follows that the computation of these spaces is crucial for implementing geometric codes. In this talk, I will present recent joint work with S. Abelard, A. Couvreur and G. Lecerf on the effective computation of bases of Riemann-Roch spaces of curves. After reviewing the state of the art on the subject, I will discuss the ideas underlying our algorithm, in particular Brill-Noether theory and the use of Puiseux expansions. The curves used in the construction of geometric codes are currently mostly limited to those for which Riemann-Roch bases are already known. This work, and those to follow, will enable the construction of geometric codes from more general curves.
Oriented Supersingular Isogeny Diffie-Hellman (OSIDH) is a post-quantum key exchange proposed by Leonardo Colò and David Kohel in 2019. The construction relies on the action of the class group of an imaginary quadratic order on a space of supersingular elliptic curves, and can thus be seen as a generalization of the well-known isogeny-based key exchange CSIDH. However, OSIDH is very different from CSIDH from an algorithmic point of view, because OSIDH uses class groups that are more structured than those of CSIDH. As Colò and Kohel themselves acknowledged, this makes OSIDH more vulnerable to attacks. To circumvent this weakness, they proposed an ingenious way to carry out the key exchange by exchanging information on the class group action in the neighbourhood of the public curves, and conjectured that this additional information would not impact security.
In this talk, we reassess the security of OSIDH by proposing a new attack, inspired by previous work of Onuki. Our attack is exponential but breaks the parameters chosen by Colò and Kohel, unlike Onuki's attack. We will also see possible countermeasures against this attack, and analyse their impact on OSIDH in terms of efficiency and functionality.
Theta functions connect the algebraic and analytic points of view in the study of abelian varieties: they are Siegel modular forms that provide coordinates on these varieties and their moduli spaces. Making this link effective requires an efficient algorithm to evaluate these theta functions at a given point. In his thesis (2006), Dupont described a heuristic algorithm based on the arithmetic-geometric mean (AGM) and a Newton scheme to evaluate certain theta functions in genus 1 and 2 in time quasi-linear in the precision. The goal of this talk is to show that one can in fact obtain a certified algorithm with uniform complexity. I will also discuss the remaining obstacles to generalizing this result in higher dimension.
F. Oort showed that the moduli space of principally polarized supersingular abelian surfaces is a union of rational curves. This is proven by showing that every principally polarized supersingular abelian surface is the Jacobian of a fibre of one of the families of genus 2 curves $\pi: \mathcal{C} \rightarrow \mathbb{P}^1$ constructed by L. Moret-Bailly. We present an algorithm that makes this construction effective: given a point $x \in \mathbb{P}^1$, we compute a hyperelliptic model of the fibre $\pi^{-1}(x)$. The algorithm uses Mumford's theory of theta groups to compute quotients by the group scheme $\alpha_p$.
Satoh's algorithm for counting the number of points of an elliptic curve $E/\mathbb{F}_q$ with $q=p^n$ is the fastest known algorithm when $p$ is fixed: it computes the invertible eigenvalue $\lambda$ of the Frobenius to $p$-adic precision $m$ in time $\tilde{O}(p^2 n m)$. Since, by Hasse's bound, recovering $\chi_{\pi}$ requires working at precision $m=O(n)$, the point counting complexity is $\tilde{O}(p^2 n^2)$, quasi-quadratic in the degree $n$.
Unfortunately, the term $p^2$ in the complexity makes Satoh's algorithm suitable only for small $p$. For medium-sized $p$, one can use Kedlaya's algorithm, which costs $\tilde{O}(p n^2 m)$, or a variant by Harvey, which costs $\tilde{O}(p^{1/2} n^{5/2} m + n^4 m)$; these have a better complexity in $p$ but a worse one in $n$. For large $p$, the SEA algorithm costs $\tilde{O}(\log^4 q)$.
In this talk, we improve the dependency on $p$ of Satoh's algorithm while retaining the dependency on $n$, to bridge the gap towards medium characteristic. We develop a new algorithm with a complexity of $\tilde{O}(p n m)$. In the particular case where we are furthermore provided with a rational point of $p$-torsion, we even improve this complexity to $\tilde{O}(p^{1/2} n m)$.
This is joint work with Abdoulaye Maiga.
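For orientation, here is the naive count that Satoh-type algorithms dramatically improve upon (exponential in $\log p$; the toy parameters are ours). It makes the objects above concrete: the trace $t$ determines $\chi_\pi(x) = x^2 - tx + p$ for $n = 1$, and $\lambda$ is its unit root.

```python
from math import isqrt

def count_points(a, b, p):
    """Naive #E(F_p) for E: y^2 = x^3 + a*x + b over F_p, via Legendre symbols."""
    legendre = lambda n: pow(n % p, (p - 1) // 2, p)   # 1, p-1, or 0
    n = p + 1                                          # includes the point at infinity
    for x in range(p):
        ls = legendre(x**3 + a * x + b)
        n += 1 if ls == 1 else (-1 if ls == p - 1 else 0)
    return n

p, a, b = 1009, 2, 3
N = count_points(a, b, p)
t = p + 1 - N                      # trace of Frobenius
assert abs(t) <= 2 * isqrt(p) + 1  # Hasse's bound
print(N, t)
```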
Solving polynomials is a fundamental computational problem in mathematics. In the real setting, we can use Descartes' rule of signs to efficiently isolate the real roots of a square-free real polynomial. In this talk, we show how to translate this method into the p-adic world. We show how the p-adic analogue of Descartes' rule of signs, Strassman's theorem, leads to an algorithm to isolate the p-adic roots of a square-free p-adic polynomial, and provide some complexity estimates, adapting the condition-based complexity framework from real/complex numerical algebraic geometry to the p-adic case.
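A minimal sketch of the bound itself (the talk's contribution is the isolation algorithm and its condition-based complexity analysis, not this elementary step):

```python
INF = float('inf')

def vp(n, p):
    """p-adic valuation of an integer (inf for 0)."""
    if n == 0:
        return INF
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def strassman_bound(coeffs, p):
    """Strassman's theorem, the p-adic analogue of Descartes' rule:
    f = sum coeffs[i] * x^i has at most N roots in Z_p (|x|_p <= 1),
    where N is the largest index attaining min_i v_p(coeffs[i])."""
    vals = [vp(c, p) for c in coeffs]
    m = min(vals)
    return max(i for i, v in enumerate(vals) if v == m)

# f(x) = x^2 - 1 has exactly the two roots +-1 in Z_5; the bound is attained:
print(strassman_bound([-1, 0, 1], 5))   # -> 2
```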
In this talk we will describe an explicit approach to computing inertial automorphic types for $\mathrm{GL}_2$. We will then give some applications of this algorithm to Diophantine problems and other questions of an arithmetic nature.
In a recent paper Don Zagier mentions a mysterious integer sequence $(a_n) _{n geq 0}$ which arises from a solution of a topological ODE discovered by Marco Bertola, Boris Dubrovin and Di Yang. In my talk I show how to conjecture, prove and even quantify that $(a_n) _{n geq 0}$ actually admits an algebraic generating function which is therefore a very particular period. The methods are based on experimental mathematics and algorithmic ideas in differential Galois theory, which I will show in the interactive part of the talk. The presentation is based on joint work with A. Bostan and J.-A. Weil.
This is joint work with P. Cho, Y. Lee and A. Södergren. Since the work of Davenport-Heilbronn, many papers have been written giving more and more precise estimates for the number of cubic fields of discriminant at most $X$. Let us mention, for example, the works of Belabas, Belabas-Bhargava-Pomerance, Bhargava-Shankar-Tsimerman, Taniguchi-Thorne and Bhargava-Taniguchi-Thorne. In this talk I will present a negative result, which shows that the Riemann hypothesis implies a limitation on the smallest possible size of the error term in these estimates. We approach the question through the theory of low-lying zeros of $L$-functions, in particular the Katz-Sarnak philosophy and the subsequent papers for the family of Dedekind zeta functions of cubic fields. I will also present numerical results, obtained with pari/gp and Belabas's program "cubic", which indicate that our result could be optimal.
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds.
This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance.
In this work, we provide generic realisations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the Lattice Isomorphism Problem (LIP). More specifically, we provide:
- a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014).
- a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism. This implies an identification scheme based on search-LIP.
- a key encapsulation mechanism (KEM) scheme and a hash-then-sign signature scheme, both based on distinguish-LIP.
The purpose of this approach is for remarkable lattices to improve the security and performance of lattice-based cryptography. For example, decoding within poly-logarithmic factor from Minkowski’s bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to a poly-logarithmic approximation factor, provided that the dual lattice is also close to Minkowski’s bound. Recent works have indeed reached such decoders for certain lattices (Chor-Rivest, Barnes-Sloan), but these do not perfectly fit our need as their duals have poor minimal distance.
Computing the class group and the unit group of a number field is a famous problem of algorithmic number theory. Recently, it has also become an important problem in cryptography, since it is used in multiple algorithms related to algebraic lattices.
Subexponential time algorithms are known to solve this problem in any number field, but they heavily rely on heuristics. The only known non-heuristic algorithm (still under ERH), due to Hafner and McCurley, is restricted to imaginary quadratic number fields.
In this talk, we will see a rigorous subexponential time algorithm computing units and class group (and more generally S-units) in any number field, assuming the extended Riemann hypothesis.
This is a joint work with Koen de Boer and Benjamin Wesolowski.
We will give an overview of some techniques and results for computing the cohomology of arithmetic groups of rank $\ge 4$ over rings of algebraic integers, together with their arithmetic and K-theoretic applications. We will then focus on methods using the Voronoi model (Euclidean or Hermitian), as well as several algorithmic improvements. We will detail some results on Voronoi complexes and their cells (for $\mathrm{GL}_N$ with $N<12$), as well as work in progress with B. Allombert and R. Coulangeon on perfect forms of rank $N$ over $\mathcal{O}_K$ and the cohomology of $\mathrm{GL}_N(\mathcal{O}_K)$ for certain rings of integers, with $N=4,5,6$. We will also mention several open problems related to these models.
One of the cornerstones of isogeny-based cryptography is the simply transitive action (known as CRS) of the class group of an order of an imaginary quadratic field on a certain set of isomorphism classes of ordinary elliptic curves.
The non-interactive key exchange based on this action (a hard homogeneous space) is relatively slow (De Feo, Kieffer, Smith, 2019), and the group structure (Beullens, Kleinjung, Vercauteren, 2019) is difficult to compute. To remedy this, we describe a simply transitive action of the Jacobian of an imaginary hyperelliptic curve on a certain set of isomorphism classes of Drinfeld modules.
After motivating the use of Drinfeld modules in place of elliptic curves, we will describe an efficient algorithm to compute the action, as well as Benjamin Wesolowski's recent attack on the key exchange given by the action.
FastECPP is currently the fastest approach to prove the primality of general numbers, and has the additional benefit of creating certificates that can be checked independently and with a lower complexity. It crucially relies on the explicit construction of elliptic curves with complex multiplication.
I will take you on a leisurely stroll through the different phases of the ECPP and fastECPP algorithms, with explanations of their complexity. We will then see the algorithmic choices I have made when integrating a parallelised implementation of fastECPP into my CM software, which has recently been used to prove the primality of a number of record size, with 50000 digits.
Let $K = \mathbb{Q}(\alpha_1,\dots,\alpha_k)$ be an algebraic number field. We are interested in computing polynomial GCDs in $K[x]$ and $K[x_1,\dots,x_n]$. Of course we also want to multiply, divide and factor polynomials over $K$. In $K[x]$ we have the Euclidean algorithm, but it “blows up”: there is a growth in the size of the rational numbers in the remainders. It is faster to compute the GCD modulo one or more primes and use the Chinese remainder theorem and rational number reconstruction. This leads to computing a GCD in $R[x]$ where $R = K \bmod p$ is usually not a field but a finite ring.
How do Computer Algebra Systems represent elements of $K$? How do Computer Algebra Systems compute GCDs in $K[x]$? What is the best way to do arithmetic in $R$? How can we compute a polynomial GCD in $K[x_1,dots,x_n]$? In the talk we will try to answer these questions and we will present some timing benchmarks comparing our own C library for computing GCDs in $R[x]$ with Maple and Magma.
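The rational number reconstruction step mentioned above, in minimal form (extended Euclid stopped halfway, as in Wang's method; the function name is ours):

```python
from math import gcd, isqrt

def ratrecon(a, m):
    """Find n/d with n ≡ a*d (mod m), |n| <= N and 0 < d <= N for N = isqrt(m//2);
    such a fraction is unique when it exists since 2*N*N < m.  None on failure."""
    N = isqrt(m // 2)
    r0, t0, r1, t1 = m, 0, a % m, 1
    while r1 > N:                      # stop the Euclidean algorithm halfway
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    n, d = (r1, t1) if t1 > 0 else (-r1, -t1)
    if 0 < d <= N and gcd(abs(n), d) == 1:
        return n, d
    return None

m = 1000003                            # reconstruct 22/7 from its image mod m
a = 22 * pow(7, -1, m) % m
print(ratrecon(a, m))                  # -> (22, 7)
```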
SIDH/SIKE was a post-quantum key exchange mechanism based on isogenies between supersingular elliptic curves, which was selected on July 5, 2022 by NIST to advance to the fourth round of the PQC competition. It was broken soon after, during the summer, in a series of three papers by Castryck-Decru, Maino-Martindale and myself.
The attacks all use the extra information on the torsion points used for the key exchange. We first review Petit's dimension 1 torsion-point attack from 2017, which applied only to unbalanced parameters. Then we explain how the dimension 2 attacks of Maino-Martindale, and especially Castryck-Decru, could break in heuristic (but in practice very effective) polynomial time some parameters, including the NIST submission, where the starting curve $E: y^2=x^3+x$ has the explicit endomorphism $i$.
Finally, we explain how, by going to dimension 8, we could break all parameters for SIKE in proven quasi-linear time.
At the beginning of the talk, we will explain how the SIDH protocol worked. We will see that the attack ultimately relies on a very simple 2x2 matrix computation! There will also be (hopefully) fun memes during the talk!
Over a decade ago, Arnold Schönhage proposed a method to compute elementary functions (exp, log, sin, arctan, etc.) efficiently in “medium precision” (up to about 1000 digits) by reducing the argument using linear combinations of pairs of logarithms of primes or Gaussian primes. We generalize this approach to an arbitrary number of primes (which in practice may be 10-20 or more), using an efficient algorithm to solve the associated Diophantine approximation problem. Although theoretically slower than the arithmetic-geometric mean (AGM) by a logarithmic factor, this is now the fastest algorithm in practice to compute elementary functions from about 1000 digits up to millions of digits, giving roughly a factor-two speedup over previous methods. We also discuss the use of optimized Machin-like formulas for simultaneous computation of several logarithms or arctangents of rational numbers, which is required for precomputations.
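A double-precision toy with only the primes 2 and 3 (the real algorithm works at thousands of digits with many primes and a fast solver for the Diophantine step; all names here are ours):

```python
from math import log, exp, factorial

LN2, LN3 = log(2), log(3)

def reduce_argument(x, bound=30):
    """Integers a, b making r = x - a*ln2 - b*ln3 tiny (brute force here; the
    paper solves this Diophantine approximation problem with a real algorithm)."""
    best = (abs(x), 0, 0)
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            r = abs(x - a * LN2 - b * LN3)
            if r < best[0]:
                best = (r, a, b)
    return best[1], best[2]

def my_exp(x, terms=8):
    """exp(x) = 2^a * 3^b * exp(r): the Taylor series of exp(r) converges
    very fast because |r| is small after the argument reduction."""
    a, b = reduce_argument(x)
    r = x - a * LN2 - b * LN3
    return 2.0**a * 3.0**b * sum(r**n / factorial(n) for n in range(terms))

print(my_exp(1.234), exp(1.234))
```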
The method found by W. Castryck and T. Decru to break SIDH requires computing $(2^n,2^n)$-isogenies from a product of elliptic curves to another abelian surface (which is also a product), which are realized as degree 2 correspondences between curves.
Transposing the attack to the other side of the SIDH exchange involves degree $(3,3)$ isogenies that can be evaluated using either theta functions or divisors on genus 2 curves. Methods for the curve approach exist in the Jacobian case, but the case of a product of elliptic curves (Bröker, Howe, Lauter, Stevenhagen 2014) can be difficult to implement for cryptographically relevant field sizes, due to various limitations in computer algebra systems such as SageMath/Singular.
I will explain how traditional algebraic geometry can be called to the rescue to give a simple construction of the curve correspondence associated to the quotient of $E_1 \times E_2$ by an isotropic $(3,3)$ kernel. This leads to a rather fast computation method relying only on elementary field operations and two square roots. The journey will bring back some memories of 19th-century projective geometry. Theta function experts might recognize familiar objects in the geometric construction.
In our article, we generalize the works of Pan et al. (Eurocrypt'21) and Porter et al. (arXiv'21) and provide a simple condition under which an ideal lattice defines an easy instance of the shortest vector problem. Namely, we show that the more automorphisms stabilize the ideal, the easier it is to find a short vector in it. This observation was already made for prime ideals in Galois number fields, and we generalize it to any ideal (whose prime factors are not ramified) of any number field. We then provide a cryptographic application of this result by showing that particular instances of the partial Vandermonde knapsack problem, also known as the partial Fourier recovery problem, can be solved classically in polynomial time.
We explain how the “embedding lemma” used in the recent attacks against SIDH can be used constructively. Namely, we show that every $N$-isogeny between abelian varieties over a finite field admits an efficient representation allowing for its evaluation in time polylogarithmic in $N$. Furthermore, using Vélu's formulas for elliptic curves, or isogenies in the theta model for dimension $g>1$, this representation can be computed in time quasi-linear in $N^g$.
Fix an integer $n \geq 2$ and, for $X \geq 0$, let $C_n(X)$ be the set of isomorphism classes of number fields of degree $n$ and absolute discriminant at most $X$. The Hunter-Pohst method enumerates $C_n(X)$ in time $O(X^{\frac{n+2}{4} + \epsilon})$. For $n \geq 3$, this complexity is expected not to be optimal: indeed, a classical conjecture, proved for $n \leq 5$, predicts that there exists a constant $c_n > 0$ such that the cardinality of $C_n(X)$ is equivalent to $c_n X$. Using a parametrization of cubic fields due to Davenport and Heilbronn, Belabas designed an algorithm enumerating $C_3(X)$ in optimal time $O(X^{1 + \epsilon})$. I will show how a parametrization of quartic fields due to Bhargava similarly allows one to enumerate $C_4(X)$ in time $O(X^{\frac{5}{4} + \epsilon})$. I will then present numerical results, as well as prospects for improvement and generalization in higher degree.
I will describe, with numerous examples, a new Pari/GP package for infinite continued fractions, which can in particular compute the limit numerically, compute the exact asymptotic speed of convergence (almost never given in the literature), accelerate continued fractions, and especially apply the powerful Apéry acceleration technique to almost all continued fractions, leading to hundreds of new ones.
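For flavour, here is the basic evaluator that such a package refines (plain Python via the classical three-term recurrence for convergents, not the Pari/GP package itself):

```python
def cf_value(a, b, n):
    """Value of the generalized continued fraction
    a(0) + b(1)/(a(1) + b(2)/(a(2) + ...)) truncated at depth n,
    using p_k = a_k * p_{k-1} + b_k * p_{k-2}, and likewise for q_k."""
    p0, q0 = a(0), 1
    p1, q1 = a(1) * a(0) + b(1), a(1)
    for k in range(2, n + 1):
        p0, p1 = p1, a(k) * p1 + b(k) * p0
        q0, q1 = q1, a(k) * q1 + b(k) * q0
    return p1 / q1

# a classical continued fraction: 4/pi = 1 + 1^2/(3 + 2^2/(5 + 3^2/(7 + ...)))
val = cf_value(lambda k: 2 * k + 1, lambda k: k * k, 200)
print(4 / val)   # -> 3.14159...
```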
The security of many existing cryptographic systems relies on the difficulty of solving the discrete logarithm problem (DLP) in a group. For a generic group, we can solve this problem with algorithms such as baby-step giant-step, Pollard-rho or Pohlig-Hellman. For a group with known structure, we use the index calculus algorithm to solve the discrete logarithm problem. Thus, the DLP on the Jacobian of a hyperelliptic curve defined over a finite field $\mathbb{F}_{q^n}$ with $n>1$ is subject to index calculus attacks. After a convenient factor basis has been chosen, the index calculus algorithm has three steps: the decomposition step, in which a random point is decomposed over the factor basis; the linear algebra step, where a matrix equation is solved; and the descent phase, in which the discrete logarithm is deduced. The complexity of the algorithm crucially depends on the size of the factor basis, since this determines the probability for a point to decompose over the basis and also the cost of the linear algebra step. Faugère et al. (EC 2014) exploit the $2$-torsion point of the curve to reduce the size of the factor basis and thus improve the complexity of the index calculus algorithm. In a similar manner, we exploit the endomorphisms of the Jacobian to reduce the size of the factor basis for certain families of ordinary elliptic curves and genus $2$ hyperelliptic Jacobians defined over finite fields. This approach adds an extra cost when performing operations on the factor basis, but our benchmarks show that reducing the size of the factor basis yields a gain in the total complexity of the index calculus algorithm with respect to generic attacks.
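A sketch of the first step only, in the simplest setting $\mathbb{F}_p^*$ (relation collection over a tiny factor base; the linear algebra and descent steps are omitted, and the toy instance is ours):

```python
import random

def trial_factor(n, base):
    """Exponent vector of n over the factor base, or None if n is not smooth."""
    vec = [0] * len(base)
    for i, q in enumerate(base):
        while n % q == 0:
            n //= q
            vec[i] += 1
    return vec if n == 1 else None

def collect_relations(g, p, base, count):
    """Decomposition step of index calculus in F_p^*: each smooth g^k mod p
    gives a linear relation  k ≡ sum_i e_i * log_g(base_i)  (mod p - 1)."""
    rels = []
    while len(rels) < count:
        k = random.randrange(1, p - 1)
        vec = trial_factor(pow(g, k, p), base)
        if vec is not None:
            rels.append((k, vec))
    return rels

p, g = 1019, 2    # toy instance: 2 generates F_1019^*
for k, vec in collect_relations(g, p, [2, 3, 5, 7], 5):
    print(k, vec)
```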
Outsourcing IT services has become very common worldwide, for reasons ranging from cost reduction to improved services. Whatever the actual reason, the concrete consequence for the company that delegates such services is that a third party ends up with its data in the clear, because of the well-known limitations of standard encryption.
Ideally, this third party should only learn the minimal information necessary for performing the requested processing, which has motivated the design of countless encryption schemes compatible with specific processing. Such schemes belong to the realm of functional encryption, where the third party recovers a function $f(x)$ from an encryption of $x$ without learning anything else about $x$, with minimal interaction. Of course, the function $f$, and hence the encryption scheme, strongly depends on the considered application, which explains the profusion of papers related to this topic. We will focus on the possibility of allowing a third party to search for the presence of chosen substrings of different lengths (and more!) at any position in the encryption of a stream of data. After an introduction to this problem and to the associated security notion, we will take a look at the security proof of one specific construction.
Filtered $(\varphi,N)$-modules over a $p$-adic field $K$ are semi-linear objects which are easy to define and can be implemented on a computer. The modules $D_{st}(V)$ defined by $p$-adic Hodge theory, where $V$ is a $p$-adic representation of the absolute Galois group of $K$, provide examples of filtered $(\varphi,N)$-modules. When $V$ is nice enough (semi-stable), the data of $D_{st}(V)$ is sufficient to recover $V$. A necessary and sufficient condition for a filtered $(\varphi,N)$-module $D$ to be written as $D_{st}(V)$ for some semi-stable representation $V$ is the condition of ``admissibility'' which imposes conditions on the way the different structures of the $(\varphi,N)$-module interact with each other.
In a joint work with Xavier Caruso, we try to provide an algorithm which takes a filtered $(\varphi,N)$-module as an input and outputs whether it is admissible or not. I will explain how we can implement filtered $(\varphi,N)$-modules on a computer and why this question is well posed. I will then present an algorithm which answers the question if the $(\varphi,N)$-module is nice enough and explain the difficulties we are facing both in this nice case and in the general case.
For an abelian variety $A$ over a number field $K$, the number $c(L)$ of torsion points of $A(L)$ is finite for every finite extension $L/K$, by the Mordell-Weil theorem.
In fact, a result of Masser predicts that $c(L)$ is polynomial in $[L:K]$ (for fixed $A$ and $K$) with exponent $g=\dim A$, and a 2012 conjecture of Hindry and Ratazzi gives the optimal exponent (smaller than $g$ in general) in terms of a certain structure of the abelian variety (related to its so-called Mumford-Tate group).
In this talk, I will discuss joint work with Lombardo and Zywina in which we prove an unconditional form of this conjecture (and the conjecture itself assuming the Mumford-Tate conjecture), emphasizing the intermediate results that may be of independent interest for understanding the Galois representations attached to abelian varieties.
In several applications one is interested in the fast computation of the codomain curve of a long chain of cyclic $N$-isogenies emanating from an elliptic curve $E$ over a finite field $\mathbb{F}_q$, where $N = 2, 3, \dots$ is some small fixed integer coprime to $q$. The standard approach proceeds by finding a generator of the kernel of the first $N$-isogeny, computing its codomain via Vélu's formulas, then finding a generator of the kernel of the second $N$-isogeny, and so on. Finding these kernel generators is often the main bottleneck.
In this talk I will explain a new approach to this problem, which was studied in joint work with Thomas Decru, Marc Houben and Frederik Vercauteren. We argue that Vélu’s formulas can be augmented with explicit formulas for the coordinates of a generator of the kernel of an N-isogeny cyclically extending the previous isogeny. These formulas involve the extraction of an N-th root, therefore we call them “radical isogeny formulas”. By varying which N-th root was chosen (i.e., by scaling the radical with different N-th roots of unity) one obtains the kernels of all possible such extensions. Asymptotically, in our main cases of interest this gives a speed-up by a factor 6 or 7 over previous methods.
I will explain the existence of radical isogeny formulas, discuss methods to find them (the formulas become increasingly complicated for larger N), and pose some open questions.
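For reference, the "standard approach" recalled in the first paragraph, in its simplest instance (odd prime degree, affine Weierstrass coordinates, textbook Vélu formulas as in Washington, Thm 12.16; the toy curve is ours, and the radical formulas themselves are beyond a short sketch):

```python
def add(P, Q, A, p):
    """Affine addition on E: y^2 = x^3 + A*x + B over F_p (None = infinity).
    The coefficient B is not needed by the addition formulas."""
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P; x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

def velu_codomain(P, N, A, B, p):
    """Coefficients (A', B') of E/<P> for a kernel generator P of odd prime order N."""
    v = w = 0
    Q = P
    for _ in range((N - 1) // 2):          # one representative per pair {Q, -Q}
        xq, yq = Q
        gx, gy = (3 * xq * xq + A) % p, (-2 * yq) % p
        vq = 2 * gx % p                    # Q is not 2-torsion since N is odd
        v = (v + vq) % p
        w = (w + gy * gy + xq * vq) % p
        Q = add(Q, P, A, p)
    return (A - 5 * v) % p, (B - 7 * w) % p

p, A, B = 1013, 0, 1        # toy curve y^2 = x^3 + 1; P = (0, 1) has order 3
P, N = (0, 1), 3
assert add(add(P, P, A, p), P, A, p) is None   # [3]P = O
print(velu_codomain(P, N, A, B, p))            # -> (0, 986), i.e. y^2 = x^3 - 27
```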
The computation of unit and class groups of number fields of arbitrary degree can be done in quantum polynomial time, in a fashion similar to Shor's factoring algorithm. Contrary to the fixed-degree case, which was solved in 2001 by Hallgren and in a follow-up paper of Schmidt and Vollmer (2005), the arbitrary-degree case requires error estimates and was solved by the conjunction of two papers, Eisenträger et al. (2014) and de Boer et al. (2020).
In the particular case of cyclotomic fields, we propose a version of the algorithm which makes use of cyclotomic units. Indeed, the Shor-like procedure of Eisenträger et al.'s algorithm produces random approximations of vectors in the dual of the lattice of units. In order to guarantee the correctness of the algorithm, they have to carry out the computations in high precision and hence require a large number of qubits. Thanks to the lattice of cyclotomic units, one can do the computations in smaller precision and reduce the number of qubits.
We will discuss an adaptation of the algorithmic reduction theory of lattices to binary codes. This includes the celebrated LLL algorithm (Lenstra, Lenstra, Lovász, 1982), as well as adaptations of associated algorithms such as the Nearest Plane Algorithm of Babai (1986). Interestingly, the adaptation of LLL to binary codes can be interpreted as an algorithmic version of the bound of Griesmer (1960) on the minimum distance of a code.
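For reference, the lattice-side algorithm being transposed: a compact textbook LLL in exact rational arithmetic (Gram-Schmidt data recomputed at every step for clarity, so this is slow; the binary-code analogue of the talk is of course different):

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction of integer row vectors."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gso():   # Gram-Schmidt vectors and mu coefficients
        gs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            w = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], gs[j]) / dot(gs[j], gs[j])
                w = [wi - mu[i][j] * gj for wi, gj in zip(w, gs[j])]
            gs.append(w)
        return gs, mu

    k = 1
    while k < n:
        _, mu = gso()
        for j in range(k - 1, -1, -1):         # size reduction (Babai steps)
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        gs, mu = gso()
        if dot(gs[k], gs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(gs[k - 1], gs[k - 1]):
            k += 1                             # Lovász condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```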
Computing roots of elements is an important step when solving various tasks in computational number theory. It arises for example during the final step of the General Number Field Sieve (Lenstra et al. 1993). This problem also intervenes during saturation processes when computing the class group or $S$-units of a number field (Biasse and Fieker).
It is known since the seminal paper introducing the LLL algorithm that one can recover elements of a given number field $K$ from approximations of one of their complex embeddings. This can be used to compute roots of polynomials. In the first part of this presentation, I will describe an extension of this approach that takes advantage of a potential subfield $k$: it replaces the decoding of one element of $K$ by the decoding of $[K:k]$ elements of $k$, at the cost of a search in a set of cardinality $d^{[K:k]}$, where $d$ is the degree of the targeted polynomial equation. We will also describe heuristic observations that are useful to speed up the computations.
In the second part of the presentation, we will describe methods to compute $e$-th roots specifically. When $K$ and $e$ are such that there are infinitely many prime integers $p$ with $\forall \mathfrak{p} \mid p,\ p^{f(\mathfrak{p} \mid p)} \not\equiv 1 \pmod{e}$, we reconstruct $x$ from $x \bmod p_1, \dots, x \bmod p_r$ using a generalisation of Thomé's work on square roots in the context of the NFS (Thomé). When this favourable condition on $K$ and $e$ is not satisfied, one can adapt Couveignes' approach for square roots (Couveignes) to relative extensions of number fields $K/k$, provided $[K:k]$ is coprime to $e$ and infinitely many prime integers $p$ are such that each prime ideal $\mathfrak{p}$ of $\mathcal{O}_k$ above $p$ is inert in $K$.
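The elementary reconstruction skeleton behind both methods, in its simplest form (plain CRT over $\mathbb{Z}$; the talk's contribution concerns obtaining the residues of the root itself under the splitting conditions above):

```python
from functools import reduce

def crt(residues, moduli):
    """Chinese remainder reconstruction of x mod prod(moduli) from the x mod p_i."""
    M = reduce(lambda u, v: u * v, moduli)
    x = 0
    for r, p in zip(residues, moduli):
        Mi = M // p
        x += r * Mi * pow(Mi, -1, p)   # moduli are pairwise coprime primes
    return x % M

# recover x = 1234**3 (an e-th power, e = 3) from its residues mod four primes
x = 1234 ** 3
ps = [10007, 10009, 10037, 10039]
print(crt([x % p for p in ps], ps) == x)   # True, since prod(ps) > x
```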