Responsables : Fredrik Johansson et Aurel Page
The complex multiplication method is well known for elliptic curves, where it may be used to construct curves used in primality proofs or to implement cryptosystems, in particular pairing-based ones. A similar approach is possible for abelian surfaces that are Jacobians of genus 2 curves, with considerable number-theoretic complications. I describe an algorithm using complex floating-point approximations with an asymptotically optimal running time, that is, quasi-linear in the size of the class polynomials produced as output. Our implementation has been used to carry out parallelised record computations, and I present experimental data.
(joint work with Emmanuel Thomé)
$\newcommand{\G}{{\mathcal{G}}}$Diffie–Hellman key exchange is a fundamental primitive in public-key cryptography. If \(\G\) is an abelian group (written additively), then the Diffie–Hellman protocol in \(\G\) is composed of four computations of the form \[ P \longmapsto [m]P = \underbrace{P + \cdots + P}_{m \text{ times}}\] for various points \(P\) and integers \(m\); optimising this scalar multiplication operation is therefore crucial.
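As a minimal illustration of the scalar multiplication \([m]P\), here is a hedged Python sketch of the textbook double-and-add method on a toy short-Weierstrass curve over a small prime field; the curve parameters below are arbitrary examples for exposition, not from the talk.

```python
# Toy double-and-add scalar multiplication on y^2 = x^3 + a*x + b over GF(p).
# Parameters are illustrative only; real systems use standardised curves.
p, a, b = 97, 2, 3
O = None  # point at infinity

def add(P, Q):
    # Chord-and-tangent group law in affine coordinates.
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(m, P):
    # Left-to-right double-and-add: O(log m) group operations.
    R = O
    for bit in bin(m)[2:]:
        R = add(R, R)
        if bit == "1":
            R = add(R, P)
    return R
```

The loop processes the bits of \(m\) from most to least significant, doubling at every step and adding \(P\) when the bit is set.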
In practice, the most efficient contemporary Diffie–Hellman implementations are based on elliptic curves, or Jacobians of genus 2 curves. But in these groups, computing \(-P\) is extremely efficient, so we can use the fact that \([m]\left(\pm P\right) = \pm([m]P)\) to simplify and speed up the protocol, identifying \(P\) with \(-P\) (formally, we are working in the quotient set \(\G/\langle\pm1\rangle\)). These ``compact'' systems offer significant savings in both space (which translates into slightly shorter keys) and computing time (through simpler pseudo-group law formulae). In the elliptic curve context, this amounts to using only \(x\)-coordinates of points and Montgomery's pseudo-group law. Bernstein's Curve25519 software, which has become a de facto reference implementation of Diffie–Hellman at the 128-bit security level, is a practical realisation of these techniques. The genus 2 analogue is Kummer surface arithmetic, where we can use particularly efficient formulae developed by the Chudnovskys and popularised in cryptography by Gaudry.
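The \(x\)-coordinate-only idea can be sketched with an unoptimised Python version of the Montgomery ladder for Curve25519, following the constants and ladder step of RFC 7748; this is a plain illustration, not Bernstein's implementation.

```python
# Unoptimised x-coordinate-only Montgomery ladder for Curve25519 (RFC 7748).
P = 2 ** 255 - 19
A24 = 121665  # (486662 - 2) / 4

def clamp(k):
    k &= (1 << 255) - 8   # clear the three low bits and bit 255
    k |= 1 << 254         # set bit 254
    return k

def x25519(k, u):
    # Computes the x-coordinate of [k]Q for any Q with x-coordinate u.
    x1, x2, z2, x3, z3, swap = u, 1, 0, u, 1, 0
    for t in reversed(range(255)):
        bit = (k >> t) & 1
        swap ^= bit
        if swap:
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = bit
        a, b = (x2 + z2) % P, (x2 - z2) % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        da, cb = d * a % P, c * b % P
        x3, z3 = (da + cb) ** 2 % P, x1 * (da - cb) ** 2 % P
        aa, bb = a * a % P, b * b % P
        e = (aa - bb) % P
        x2, z2 = aa * bb % P, e * (aa + A24 * e) % P
    if swap:
        x2, x3, z2, z3 = x3, x2, z3, z2
    return x2 * pow(z2, P - 2, P) % P
```

Because only \(x\)-coordinates are used, the ladder never distinguishes \(Q\) from \(-Q\), which is exactly the quotient \(\G/\langle\pm1\rangle\) described above.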
Recent years have seen renewed interest in the Gallant–Lambert–Vanstone (GLV) technique for computing \([m]P\). Here, we suppose our elliptic curve (or our genus 2 Jacobian) has an efficiently computable non-integer endomorphism \(\phi\), which when applied to elements of \(\G\) acts like \([\lambda]\) (for some large eigenvalue \(\lambda\)). Suppose we want to compute \([m]P\): first we use the Euclidean algorithm to compute much smaller integers \(a\) and \(b\) such that \(a + b\lambda \equiv m \pmod{\#\G}\), and then we compute \[ [m]P = [a]P + [b]\phi(P) \ .\] The running time of the multiexponentiation depends on the size of \(a\) and \(b\), while traditional scalar multiplication depends on the size of \(m\). In practice, \(a\) and \(b\) have half the bitlength of \(m\), which means that GLV and its variants can offer us a significant speedup.
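The decomposition step can be sketched in Python with the classical extended-Euclidean lattice trick from the GLV paper; the modulus \(n\) and eigenvalue \(\lambda\) in the test are arbitrary toy values, not parameters of any real curve.

```python
# Toy GLV scalar decomposition: given m, find small a, b with
# a + b*lam = m (mod n), using short vectors of the lattice
# L = {(a, b) : a + b*lam = 0 (mod n)} found by the extended
# Euclidean algorithm (Gallant-Lambert-Vanstone).
from math import isqrt

def short_lattice_vectors(n, lam):
    # Each EEA step gives r = s*n + t*lam, hence (r, -t) lies in L.
    # Stop once the remainder drops below sqrt(n).
    r0, r1, t0, t1 = n, lam, 0, 1
    while r1 >= isqrt(n):
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return (r1, -t1), (r0, -t0)

def glv_split(m, n, lam):
    v1, v2 = short_lattice_vectors(n, lam)
    # Babai round-off: write (m, 0) ~ c1*v1 + c2*v2 and subtract.
    det = v1[0] * v2[1] - v1[1] * v2[0]   # equals +-n for EEA vectors
    c1 = round(m * v2[1] / det)
    c2 = round(-m * v1[1] / det)
    a = m - c1 * v1[0] - c2 * v2[0]
    b = -c1 * v1[1] - c2 * v2[1]
    return a, b
```

Since \(v_1, v_2\) lie in the lattice of decompositions of \(0\), the rounded difference \((a,b)\) satisfies \(a + b\lambda \equiv m \pmod n\) with both coordinates of size roughly \(\sqrt n\).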
In this talk, we will discuss the adaptation of GLV techniques to \(x\)-coordinate-only and Kummer surface systems. On the practical side, we will present some experimental results for a new elliptic-curve-based implementation. On the more theoretical side, we will present some new formulae for Kummer surface systems with explicit real multiplication endomorphisms.
This is joint work with Ted Chinburg, Ben McReynolds, Matt Stover and James Sundstrom.
This is joint work with Irene Bouw, Wei Ho, Beth Malmskog, PadmavathiSrinivasan and Christelle Vincent.
Joint work with Fabien Laguillaumie.
Joint work in progress with William Hart and Fredrik Johansson.
This is joint work with Gilles Lachaud and Yih-Dar Shieh.
In fact this is a byproduct of a larger project, in which we construct practical algorithms to solve S-unit, Mordell, cubic Thue, cubic Thue–Mahler, as well as generalized Ramanujan–Nagell equations, and to compute S-integral points on rational elliptic curves with given Mordell–Weil basis. Our algorithms rely on new height bounds, which we obtained using the method of Faltings (Arakelov, Parshin, Szpiro) combined with the Shimura–Taniyama conjecture (without relying on linear forms in logarithms), as well as several improved and new sieves. In addition we used the resulting data to motivate several conjectures and questions, such as Baker’s explicit abc-conjecture, and a new conjecture on the number of S-integral points of rational elliptic curves.
This is joint work with Rafael von Känel.
We will explain how the mod l Galois representation attached to a classical newform may be efficiently computed, by isolating it among the l-torsion of a modular jacobian. This yields a way of computing the coefficient a(p) of the form in time polynomial in log p, which makes it the most efficient method known to date.
We will show how the Galois representation computations presented in last week’s talk may be certified, thanks to Serre’s modularity conjecture and explicit group cohomology computations.
This is joint work with Antoine Joux.
Joint work with Laurent Imbert and Fabien Laguillaumie.
One way to study circulant Hadamard matrices is the so-called ``field-descent method''. The essential fact behind this method is that certain cyclotomic integers necessarily are contained in relatively small fields and thus must have relatively small complex modulus. In this talk, I will present a method which reveals a complementary phenomenon: certain cyclotomic integers cannot be contained in relatively small fields and thus must have relatively large complex modulus. This method provides new necessary conditions for the existence of circulant Hadamard matrices.
This is joint work with K. H. Leung.
The best known estimates for $\chi_m(\mathbb{R}^n)$ and $m_1\left(\mathbb{R}^n,\Vert \cdot \Vert\right)$ exhibit relations with Euclidean lattices, in particular with the sphere packing problem.
The determination of $m_1\left(\mathbb{R}^n,\Vert \cdot \Vert\right)$ for the Euclidean norm is still a difficult question. We study this problem for norms whose unit ball is a convex polytope. More precisely, if the unit ball corresponding to $\Vert \cdot \Vert$ tiles $\mathbb{R}^n$ by translation, for instance if it is the Voronoi region of a lattice, then it is easy to see that $m_1\left(\mathbb{R}^n,\Vert \cdot \Vert\right)\geq \frac{1}{2^n}$.
C. Bachoc and S. Robins conjectured that equality always holds. We show that this conjecture is true for $n=2$ and for some families of Voronoi regions of lattices in higher dimensions.
In all these methods and algorithms, one computes with the $p$-adic numbers, and the problem of precision management is crucial there. With Xavier Caruso and David Roe, we have developed a method, called differential precision, to study and manage $p$-adic precision.
In this talk, we will focus on applying this method to the study of the computation of isogenies between elliptic curves via the resolution of certain $p$-adic differential equations with separable variables. This is joint work with Pierre Lairez from 2016, which only treats the case $p>2$. We will also present some recent progress for $p=2$.
We will begin the talk with a review of the standard pairing constructions on an abelian variety.
The goal of this talk is to present some beautiful mathematics connected with this theorem. I will start by sketching two “classical” proofs of this result, then show how to combine them to obtain an effective variant of Christol’s theorem. I will then present an application of this result to a question of an algorithmic nature.
Time permitting, I will also discuss the links between Christol’s theorem and $p$-adic differential equations, and show how to use this new ingredient to speed up the preceding algorithm.
(Joint work with A. Bostan, G. Christol and Ph. Dumas.)
The canonical surjection \[ \pi_l \colon J_K(C) \longrightarrow J_K(D) = J_K(C) / \mathcal{V}\] is an $(l,l)$-isogeny with kernel $\mathcal{V}$.
In this talk, we consider the algorithm of Couveignes and Ezome for finding an equation of the curve $D$ from its Kummer variety, constructed using functions defined on the Jacobian of the initial curve and its quotient, together with the equations defining the isogeny $\pi_l$.
However, constructions for general FE are far from practical, or rely on non-standard and poorly understood cryptographic assumptions.
In this talk I will focus on the construction of efficient FE schemes for linear functions (i.e. the inner product functionality), and the framework in which our constructions hold. Such schemes yield many practical applications, and our constructions are the first FE schemes for inner products modulo a prime that are both efficient and recover the result whatever its size. Our framework consists of a cyclic group $G$ where the decision Diffie-Hellman assumption holds together with a subgroup $F$ of $G$ where the discrete logarithm problem is easy. This setting can be instantiated with class groups of imaginary quadratic fields, and allows us to encode information in the exponent of the subgroup $F$, which can be efficiently recovered whatever its size.
Between 2016 and 2018, Astudillo, Diaz y Diaz, Friedman and Ramirez-Raposo gave a complete classification of number fields with low regulator for any degree $\leq 7$ and for totally real and totally complex octic fields, relying both on Remak-Friedman’s inequalities and on a procedure derived from an explicit formula for the Dedekind zeta function.
In joint work with Giuseppe Molteni, we propose a conjectural improvement of the upper bounds for the discriminant which would allow us, using the same method, to give a classification for other signatures in degrees 8 and 9: the main conjecture concerns the sharpest estimate for a factor which in fact depends on the signature of the fields.
The immediate goal is to create a web-based special functions reference work that addresses some of the drawbacks of resources such as the NIST Digital Library of Mathematical Functions, the Wolfram Functions site, and Wikipedia. A potential longer-term ambition is to provide a software library for symbolic knowledge about special functions, usable by computer algebra systems and theorem proving software.
This talk will discuss the motivation behind the project, design issues,and possible applications.
Ray class fields are generalizations of Hilbert class fields. For apositive integer $m > 0$, the ray class field $H_K(m)$ is obtained byrelaxing the ramification conditions for ideals of $\mathcal{O}_K$dividing $m$.
It turns out that there is a particular subfield $L(m)$ of $H_K(m)$which can be generated using special values of higher-level modularfunctions and Stark’s conjectures. For some values of $m$, this $L(m)$contains the Hilbert class field $H_K(1)$. Thus, we can compute theHilbert class field as a subfield of $L(m)$. In this talk, we find anupper bound for such an integer $m$.
If time permits, we will discuss how to compute the Hilbert class fieldas a subfield of this $L(m)$ when $m = 2$.
We will survey and synthesise the current proposals submitted to the NIST competition. We will also look at code-based signature primitives, an area noticeably less developed than encryption.
In this talk, I will explain how we can determine the exact result of the SDP. Furthermore, we use these techniques to obtain exact results for the kissing problem of the hemisphere in dimension 8 as well as for other packing problems.
Using the exact rational solution for the kissing problem of the hemisphere, we can prove that the optimal kissing configuration is unique up to isometry.
This is joint work with David de Laat and Philippe Moustrou.
On the cryptographic side, many algorithms based on lattices in fact use structured lattices, in order to improve the efficiency of the schemes. Most of the time, these structured lattices are $R$-modules of small dimension, where $R$ is the ring of integers of some number field. It is then tempting to try and adapt the LLL algorithm, which works over lattices (i.e., $\mathbb Z$-modules), to these $R$-modules.
All the previous works trying to solve this question focused on rings of integers $R$ that were Euclidean, as the LLL algorithm over $\mathbb Z$ crucially relies on Euclidean division. In this talk, I will describe the first LLL algorithm which works in any ring of integers $R$. This algorithm is heuristic and runs in quantum polynomial time if given access to an oracle solving the closest vector problem in a fixed lattice, depending only on the ring of integers $R$.
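For orientation, here is a hedged Python sketch of the classical LLL algorithm over $\mathbb Z$, the Euclidean-division-based baseline that the talk generalises to $R$-modules; it uses exact rational arithmetic and the usual Lovász constant $\delta = 3/4$, with no attempt at efficiency.

```python
# Textbook LLL reduction over Z with exact rational arithmetic.
# Illustration of the classical algorithm; not the R-module variant.
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    b = [list(map(Fraction, v)) for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        gs = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            w = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], gs[j]) / dot(gs[j], gs[j])
                w = [wi - mu[i][j] * gj for wi, gj in zip(w, gs[j])]
            gs.append(w)
        return gs, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):       # size-reduce b[k]
            gs, mu = gram_schmidt()
            q = round(mu[k][j])              # Euclidean rounding step
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        gs, mu = gram_schmidt()
        if dot(gs[k], gs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(gs[k - 1], gs[k - 1]):
            k += 1                           # Lovász condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]  # swap and step back
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

The rounding `round(mu[k][j])` is precisely where Euclidean division of $\mathbb Z$ enters; replacing it when $R$ has no Euclidean division is the difficulty the talk addresses.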
This is joint work with Changmin Lee, Damien Stehlé and Alexandre Wallet.
We will be interested in such data compatible with the action of a group $G\subset\mathrm{SO}(E)$, and will describe two situations in more detail:
$\bullet$ The action of a group of order 7 in real dimension $6$, a kind of appendix to Elkies’ article in The Eightfold Way (The Beauty of Klein’s Quartic Curve), Cambridge University Press (1999); S. Levy ed.
$\bullet$ Actions of “not too small” groups in dimension $4$, in relation with genus $2$ curves.
In all cases, the key ingredient is Torelli’s theorem.
The effective computation of bases of Riemann-Roch spaces arises in many practical areas, notably for arithmetic in Jacobians of curves and for algebraic-geometric error-correcting codes. We propose a probabilistic variant of the Brill-Noether algorithm described by Goppa for computing a basis of the Riemann-Roch space $L(D)$ associated with a divisor $D$ of a nodal projective plane curve $C$ over a sufficiently large perfect field $k$.
We prove that its complexity (measured by the number of arithmetic operations in the field $k$) is in $O\big(\max\big(\deg(C)^{2\omega}, \deg(D^+)^{\omega}\big)\big)$, where $\omega < 2.38$ is the exponent of linear algebra and $D^+$ is the smallest effective divisor satisfying $D^+ \geq D$. This probabilistic algorithm can fail, but under a few conditions we prove that its failure probability is bounded by $O\big(\max\big(\deg(C)^4, \deg(D^+)^2\big)/|E|\big)$, where $E$ is a finite subset of $k$ from which elements of $k$ can be drawn uniformly at random.
To our knowledge, this complexity bound is the best obtained so far for the computation of Riemann-Roch spaces in a general setting. In the context of computing the group law in the Jacobian of a smooth curve, our bound also improves on the best bound known to date, due to Khuri-Makdisi. Our algorithm also benefits from the fact that its efficiency rests on two building blocks for which efficient algorithms exist: linear algebra and univariate polynomial arithmetic. We have implemented this algorithm in C++/NTL. The experimental results obtained with this implementation seem to indicate an improvement in computation times compared with the implementation in the Magma computer algebra system (up to 200 times faster on certain instances over large finite fields).
We present a fully verified arbitrary-precision integer arithmetic library designed using the Why3 program verifier. It is intended as a verified replacement for the mpn layer of the state-of-the-art GNU Multi-Precision library (GMP).
The formal verification is done using a mix of automated provers and user-provided proof annotations. We have verified the GMP algorithms for addition, subtraction, multiplication (schoolbook and Toom-2/2.5), schoolbook division, divide-and-conquer square root and modular exponentiation. The rest of the mpn API is work in progress. The main challenge is to preserve and verify all the GMP algorithmic tricks in order to get good performance.
Our algorithms are implemented as WhyML functions. We use a dedicated memory model to write them in an imperative style very close to the C language. Such functions can then be extracted straightforwardly to efficient C code. For medium-sized integers (less than 1000 bits, or 100,000 for multiplication), the resulting library is performance-competitive with the generic, pure-C configuration of GMP.
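To illustrate the kind of limb-level code being verified, here is a hedged Python model of schoolbook mpn-style addition with carry propagation, in the spirit of GMP's mpn_add_n on 64-bit limbs; the helper names are ours, not the library's, and this is a specification-style model rather than the verified WhyML code.

```python
# Model of mpn-style addition: numbers are little-endian lists of
# 64-bit limbs; returns (limbs, carry_out), like GMP's mpn_add_n.
BITS = 64
MASK = (1 << BITS) - 1

def mpn_add_n(a, b):
    assert len(a) == len(b)
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & MASK)   # low 64 bits of the limb sum
        carry = s >> BITS      # carry into the next limb (0 or 1)
    return out, carry

def to_limbs(n, size):
    return [(n >> (BITS * i)) & MASK for i in range(size)]

def from_limbs(limbs):
    return sum(l << (BITS * i) for i, l in enumerate(limbs))
```

The verification task is to prove, for every limb count, that the returned limbs and carry satisfy exactly the identity checked in this model: the limb array plus the shifted carry equals the mathematical sum.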
The Cohen–Lenstra–Martinet heuristics are a probabilistic model for the behaviour of class groups of number fields in natural families. In this talk, I will discuss a generalisation to ray class groups. About 5 years ago, Varma determined the average number of 3-torsion elements in the ray class group of K with respect to m, when m is a fixed rational modulus, and K runs through the family of imaginary quadratic or of real quadratic fields. Since then, Bhargava has been challenging the community to come up with a natural probabilistic model that would explain the numbers obtained by Varma, and to predict more general averages in more general families of number fields. As I will explain in my talk, there turns out to be a very simple-minded way of doing so, and also a much more conceptual one, and they both turn out to be equivalent. The more conceptual one involves an object that does not appear to have been treated in the literature before, but that is very natural: the Arakelov ray class group of a number field. This is joint work with Carlo Pagano.
Given a Galois extension of number fields L/K, the Chebotarev theorem asserts the equidistribution of Frobenius elements, attached to unramified prime ideals, among the conjugacy classes of Gal(L/K). We will present a study of the variations of the error term in the Chebotarev theorem as L/K runs through certain families of extensions. We will give a transfer formula for the classical prime (or prime ideal) counting functions, which reduces the situation to that of an extension of the rationals. Finally, we will discuss some consequences for “Linnik-type” problems and for the analogue of the Chebyshev bias phenomenon in number fields. This talk is based on joint work with D. Fiorilli.
We will present a p-adic method to compute Galois representations attached to modular forms. This method compares very favourably to the better-known complex-analytic approach. The main ingredient is the use of “moduli-friendly” forms introduced by Makdisi, which allow us to evaluate modular forms at p-adic points of modular curves, and thus to compute in the Jacobian of modular curves without writing down any equations or q-expansions.
We give an overview of the history of computing Galois groups over p-adic fields, with some diversions to recent progress over the rational field. We focus on the “resolvent method,” a family of techniques to compute Galois groups, and present a recent algorithm to do this in general over p-adic fields, the first of its kind. This algorithm greatly increases the degree of polynomial that can be routinely handled, and for example has been used to extend existing databases of Galois groups of p-adic fields to include all degree 18, 20 and 22 extensions of the 2-adic field. The implementation and tables of results are available on the speaker’s website.
In this talk, I will explain how to test efficiently and effectively whether two modular Galois representations of the absolute Galois group of the rationals are isomorphic. In particular, I will present new optimal bounds on the number of traces to be tested. I will also briefly discuss isomorphism graphs, related results on Hecke algebras, and the construction of a database of representations.
We present a new algorithm that computes the characteristic polynomials of the $p$-curvatures of a differential operator with integer coefficients, for all primes $p$ below a given integer $N$, in time quasi-linear, hence quasi-optimal, in $N$. The algorithm builds on the work of A. Bostan, X. Caruso and E. Schost, which reduces the computation of this invariant to a matrix factorial, as well as on the technique for computing factorials developed by E. Costa, R. Gerbicz and D. Harvey.
Calcium is a C library for real and complex numbers in a form suitable for exact algebraic and symbolic computation. Numbers are represented as elements of fields $\mathbb{Q}(a_1,\ldots,a_n)$ where the extension numbers $a_k$ may be algebraic or transcendental. The system combines efficient field arithmetic with automatic construction of fields and ideals of algebraic relations, resulting in a practical computational model of $\mathbb{R}$ and $\mathbb{C}$ in which equality is rigorously decidable for a large class of numbers which includes $\overline{\mathbb{Q}}$ as a subset.
Given an algebraic group $G$ acting on an affine space $V$, the set $V(\mathbb{Z})/G(\mathbb{Z})$ of integral orbits sometimes parametrises arithmetic objects and is therefore interesting to enumerate. One way to do so is to make explicit a fundamental domain for the action of $G(\mathbb{Z})$ on $V(\mathbb{R})$ and to search for integral points inside it. To this end, one can try to cover this fundamental domain by a family of Euclidean balls of constant radius whose cardinality is of the same order of magnitude as the number of integral points. I will show how to carry out this strategy in the simple case of the right action of $\mathrm{GL}_n$ on $\mathrm{M}_n$, whose integral orbits parametrise the submodules of $\mathbb{Z}^n$, and for which easy-to-describe approximate fundamental domains are available, namely Siegel sets.
We consider the problem of deciding whether two matrices are conjugate. If the coefficient ring is a field, this problem can be easily solved by using the Jordan normal form or the rational canonical form. For more general coefficient rings, the situation becomes increasingly challenging, both from a theoretical and a practical viewpoint. In this talk, we show how the conjugacy problem for integer matrices can be efficiently decided using techniques from group and number theory. This is joint work with Bettina Eick and Eamonn O’Brien.
The Chebyshev bias is a phenomenon first studied in the setting of “prime number races” (Rubinstein and Sarnak, 1994) to explain the apparent predominance of primes congruent to 3 mod 4 over those congruent to 1 mod 4. These prime number races were generalised, notably to the context of number fields, by Ng in 2000. In recent work, Fiorilli and Jouve studied the Chebyshev bias in families of number field extensions, and exhibited limiting behaviours of “large deviations” and “central limit theorem” type. In the work presented here, I exhibit the considerable influence that certain real zeros of Artin L-functions have on the Chebyshev bias in particular number field extensions.
We present an effective method for computing, over the $p$-adics, isogenies between elliptic curves and Jacobians of hyperelliptic curves of small genus. An important application is the case of curves defined over a finite field of small characteristic, which can be handled by lifting to the $p$-adics. Our algorithm relies on solving differential equations with good control of precision loss. Its analysis is based on the theory of differential precision developed by Caruso, Roe and Vaccon.
In recent years, the notion of rank metric in the context of coding theory has seen many interesting developments in terms of applications such as space time coding, network coding or public key cryptography. These applications raised the interest of the community for theoretical properties of this type of codes, such as the hardness of decoding in rank metric or better decoding algorithms. Among classical problems associated to codes for a given metric, the notion of code equivalence has always been of the greatest interest. In this talk, we discuss the hardness of the code equivalence problem in rank metric for $\mathbb F_{q^m}$-linear and general rank metric codes.
Quantum computing is an increasingly hot topic, as it promises many advantages, whether in the complexity of its algorithms or in what it makes possible in cryptography. In this talk, we will first look at quantum circuits: the model usually used by researchers and engineers to describe quantum processes. We will consider a fundamental question about these circuits, that of the completeness of an equational theory. We will then present the ZX-calculus, a model arising from category theory, which answers this same question positively.
Given a partial differential equation (PDE), its solutions can be difficult, if not impossible, to describe. The purpose of the Fundamental theorem of tropical (partial) differential algebraic geometry is to extract from the equations certain properties of the solutions. More precisely, this theorem proves that the support of the solutions in $k[[t_1, \cdots, t_m]]$ (with $k$ a field of characteristic zero) can be obtained by solving a so-called tropicalized differential system.
We analyze complex multiplication for Jacobians of curves of genus 3, as well as the resulting Shimura class groups and their subgroups corresponding to Galois conjugation over the reflex field. We combine our results with numerical methods to find CM fields $K$ for which there exist both hyperelliptic and non-hyperelliptic curves whose Jacobian has complex multiplication by $\mathbb{Z}_K$.
Toric codes, introduced by Hansen in 2002, generalize (weighted) Reed-Muller codes on other toric varieties than projective spaces. They consist of evaluation codes of monomials at tuples of non-zero coordinates, which correspond to the points on the dense torus contained in the associated toric variety. Our aim is to ‘projectivise’ these codes, in the same spirit that turns a Reed-Muller code into a projective one: we consider codes obtained by evaluating global sections on the whole set of the rational points of a toric variety. We focus on simplicial toric varieties, which come with a nice quotient description, and we give an explicit construction of projective codes on them, as well as a combinatorial way to determine their parameters. ‘Projectivising’ toric codes opens new possibilities of getting codes with excellent parameters, by extending some champion classical toric codes geometrically.
Sturm bounds indicate how many successive Fourier coefficients suffice to determine a modular form. For classical modular forms, they also provide bounds on the number of Hecke operators generating the algebra of the same name. This talk studies the situation for certain automorphic forms over function fields, called Drinfeld modular forms. This is joint work with Fu-Tsun Wei (National Tsing-Hua University, Taiwan).
This talk describes joint work with Clément Pernet on an algorithm which computes the characteristic polynomial of a matrix over a field within the same asymptotic complexity, up to constant factors, as the multiplication of two square matrices. Previously, this was only achieved by resorting to genericity assumptions or randomization techniques, while the best known complexity bound with a general deterministic algorithm was obtained by Keller-Gehrig in 1985 and involves logarithmic factors. The new algorithm computes more generally the determinant of a univariate polynomial matrix in reduced form.
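For contrast with the fast algorithm of the talk, here is a hedged Python sketch of the classical Faddeev-LeVerrier method, a simple deterministic $O(n^4)$ baseline for computing a characteristic polynomial exactly over the rationals; it is a textbook illustration, not the new algorithm nor Keller-Gehrig's.

```python
# Faddeev-LeVerrier: characteristic polynomial of an n x n matrix
# over the rationals, returned as coefficients [1, c1, ..., cn] of
# x^n + c1*x^(n-1) + ... + cn.  Deterministic, O(n^4) field operations.
from fractions import Fraction

def charpoly(A):
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        # M <- A @ M
        M = [[sum(A[i][t] * M[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]
        c = -sum(M[i][i] for i in range(n)) / k   # c_k = -tr(A M)/k
        coeffs.append(c)
        for i in range(n):                        # M <- M + c_k * I
            M[i][i] += c
    return coeffs
```

Each of the $n$ rounds costs a matrix product, so this baseline sits well above the matrix-multiplication-time bound achieved by the algorithm of the talk.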
The effective determination of integral points on algebraic varieties is a difficult problem, especially in dimension greater than 1. In this talk, I will briefly present two natural approaches to integral points which, in favourable cases, make it possible to find all of them. In seeking refinements of these methods, one arrives at interesting combinatorial problems, which I will highlight in the specific case of a “modular” threefold that can be defined by a quartic equation in $\mathbb{P}^4$.
Motivated by the richness of Gauss’s algorithm, which efficiently computes the best rational approximations of a real number, many mathematicians have proposed generalisations of these algorithms to approximate vectors of dimension greater than 1. Examples include the algorithm introduced by Poincaré at the end of the 19th century, and those of Brun and Selmer in the middle of the 20th century.
From the early 1990s to today, a number of works have studied the convergence of these algorithms. Schweiger and Broise in particular proved that the Selmer and Brun algorithms are convergent and ergodic. Perhaps more surprisingly, Nogueira proved that the algorithm proposed by Poincaré almost never converges.
Starting from the classical case of the Farey algorithm, which is an “additive” version of Gauss’s algorithm, I will present a combinatorial point of view on these algorithms that allows one to pass from a deterministic vision to a probabilistic approach. Indeed, in this model, taking a random vector with respect to Lebesgue measure corresponds to following a random walk with memory in a labelled graph called a simplicial system. The laws of this random walk are elementary, and we can develop probabilistic techniques to study its generic dynamical behaviour. This will lead us to a purely graph-theoretic criterion for proving the convergence or non-convergence of a continued fraction algorithm.
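To fix ideas on the additive versus multiplicative distinction, here is a hedged Python sketch contrasting the Gauss algorithm with the Farey map on a rational input; runs of consecutive additive steps recover the Gauss partial quotients.

```python
# Continued fraction of a rational x: the Gauss algorithm performs one
# Euclidean division per step, while the additive Farey map only
# subtracts; run lengths of Farey steps recover the Gauss quotients.
from fractions import Fraction

def gauss_cf(x):
    # Multiplicative algorithm: partial quotients via Euclidean division.
    quotients = []
    while x.denominator != 1:
        a = x.numerator // x.denominator
        quotients.append(a)
        x = 1 / (x - a)
    quotients.append(x.numerator)
    return quotients

def farey_cf(x):
    # Additive Farey map on x in (0, 1): x -> x/(1-x) if x <= 1/2,
    # else x -> (1-x)/x; counting steps between branch switches
    # yields the partial quotients [a1, a2, ...] of x.
    quotients, count = [], 1
    while x != 0:
        if x <= Fraction(1, 2):
            x = x / (1 - x)
            count += 1
        else:
            x = (1 - x) / x
            quotients.append(count)
            count = 1
    return quotients
```

A single Gauss step with quotient $a$ corresponds to $a$ Farey steps, which is why the Farey algorithm is the natural "slow" model for the probabilistic analysis described above.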
Let $E/F$ be an elliptic curve defined over a number field $F$, and let $K/F$ be a quadratic extension. If the analytic rank of $E(K)$ is one, one can often use Heegner points (or the more general Darmon points) to produce (at least conjecturally) a nontorsion generator of $E(K)$. If the analytic rank of $E(K)$ is larger than one, the problem of constructing algebraic points is still very open. In very recent work, Michele Fornea and Lennart Gehrmann have introduced certain $p$-adic quantities that may be conjecturally related to the existence of these points. In this talk I will explain their construction, and illustrate it with some numerical experiments that we have been able to carry out in support of their conjecture. This is joint work with Michele Fornea and Xevi Guitart.
We study property (FA) and its hereditary version for unit groups of $2$-by-$2$ matrices over orders in totally definite quaternion algebras with rational centres. In particular we consider the three matrix rings over totally definite rational quaternion algebras that can appear as Wedderburn-Artin components of a group ring $\mathbb{Q}G$.
A key step is the construction of amalgamated decompositions of the elementary group $E_2(\mathcal{O})$, where $\mathcal{O}$ is an order in a rational division algebra, and of certain arithmetic groups $\Gamma$. The methods for the latter turn out to work in much greater generality, and most notably are carried out to obtain amalgam decompositions for the higher modular groups $SL_+(\Gamma_n(\mathbb{Z}))$, with $n \le 4$, which can be seen as higher-dimensional versions of the modular and Bianchi groups.
Let $K$ be a quadratic field which is not an imaginary quadratic field of class number one. We describe an algorithm to compute a superset of the set of primes $p$ for which there exists an elliptic curve over $K$ admitting a $K$-rational $p$-isogeny. Combining this algorithm with recent work on the determination of quadratic points on low-genus modular curves, we determine, conditionally on the Generalised Riemann Hypothesis, the above set of isogeny primes for several quadratic fields, providing the first such examples after Mazur's 1978 determination for $K = \mathbb{Q}$. We will give a live demo of the Sage and PARI/GP implementations of the algorithm.
In this talk, we will review the state of the art of algorithms for solving the Euclidean lattice problems underlying cryptography. In more detail, this talk contains two parts. In the first part, we will focus on lattice problems such as the approximate Shortest Vector Problem (approx-SVP) and on lattice reduction algorithms, the best known solving algorithms so far. In particular, I will present an improved enumeration-based lattice reduction algorithm, which is shown to be (potentially) relevant to cryptanalysis. In the second part, we will instead consider a quantum problem that is computationally equivalent to approx-SVP. By directly solving a quantum problem, we may expect to make more powerful use of quantum computation. However, the best known algorithms for solving approx-SVP via this quantum problem do not yet outperform lattice reduction.
Pairings on elliptic curves are involved in signatures, NIZK proofs, and recently in blockchains (ZK-SNARKs). These pairings take as input two points on an elliptic curve $E$ over a finite field, and output a value in an extension of that finite field. Usually, for efficiency reasons, this extension degree is of the form $2^i 3^j$ (such as 12, 18, 24), and moreover the characteristic of the finite field has a special form. The security relies on the hardness of computing discrete logarithms in the group of points of the curve and in the finite field extension.
In 2013-2016, new variants of the function field sieve and number field sieve algorithms turned out to be faster in certain finite fields related to pairing-based cryptography, in particular those with a very efficient arithmetic. Small-characteristic settings are now discarded. The situation for $GF(p^k)$ where $p$ is prime and $k$ is small is still quite unclear. We refine the work of Menezes-Sarkar-Singh and Barbulescu-Duquesne to estimate the cost of a hypothetical implementation of the Special Tower NFS in $GF(p^k)$ for small $k$, and deduce parameter sizes for cryptographic pairings.
Joint work with Shashank Singh, IISER Bhopal, India.
References
On the alpha value of polynomials in the tower number field sieve algorithm, Aurore Guillevic and Shashank Singh, Mathematical Cryptology, Vol 1 No 1 (Feb 2021), journal version, preprint.
A short list of pairing-friendly curves at the 128-bit security level, Aurore Guillevic, presented at PKC’2020 recorded talk, ePrint 2019/1371.
Implementation available with MIT licence on gitlab. Alpha in Magma, alpha and TNFS simulation in SageMath.
Integer factorisation and discrete logarithm algorithms, at cryptographic sizes, have two steps relevant to this talk: polynomial selection and cofactorisation. The first consists in selecting two homogeneous polynomials $F(x,y)$ and $G(x,y)$ in $\mathbb{Z}[x,y]$ such that the set of integers $\{F(a,b)G(a,b) \mid (a,b) \in \text{a rectangle}, \gcd(a,b)=1\}$ contains as many $B$-smooth integers as possible (integers all of whose prime factors are at most $B$). The second consists in factoring integers of the form $F(a,b)$ and $G(a,b)$.
P. Montgomery (1986) sped up cofactorisation by using elliptic curves corresponding to rational points on certain modular curves. Mazur's program can be stated as follows: given a number field $K$, bound the level of the modular curves that have $K$-rational points, and effectively compute these points. Work of Rouse, Sutherland, Zureick-Brown and Zywina (2016, 2017) partially solved the case $K = \mathbb{Q}$.
Recent progress on Mazur's Program B (2019-2021) partially answers several questions: a) rational points on modular curves with finitely many points; b) curves of composite level; c) $K$-rational points for quadratic fields $K$; d) torsion points over number fields.
We propose a minor modification of the polynomial selection step. The state of the art consists in constructing a large number of polynomials by Kleinjung's methods (2006) and keeping those that optimise Murphy's $\alpha$ function (2000); empirically, this increases the probability that $F(a,b)$ and $G(a,b)$ are $B$-smooth. We propose to also take into account the existence of families of elliptic curves that allow integers of the form $F(a,b)$ to be factored quickly. This amounts to describing the number fields $K$ of given degree, for instance $2$, admitting modular curves with $K$-rational points.
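The smoothness condition from the polynomial selection step above can be made concrete in a few lines of Python (an illustrative sketch, not part of any actual sieve implementation; the function names are mine):

```python
from math import gcd

def is_smooth(n, B):
    """True if |n| > 0 and every prime factor of n is at most B."""
    n = abs(n)
    if n == 0:
        return False
    p = 2
    while p <= B and n > 1:
        while n % p == 0:
            n //= p
        p += 1
    return n == 1

def smooth_count(F, G, A, B):
    """Count coprime pairs (a, b) in [1,A] x [1,A] with F(a,b)*G(a,b) B-smooth;
    a toy stand-in for the yield that polynomial selection tries to maximise."""
    return sum(1 for a in range(1, A + 1) for b in range(1, A + 1)
               if gcd(a, b) == 1 and is_smooth(F(a, b) * G(a, b), B))
```

Real sieves replace this trial division by sieving and by cofactorisation with elliptic curves, but the quantity being optimised is the same.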
This special session is dedicated to welcoming the interns in the LFANT team. After a general presentation of the team, we will present two software packages that we develop: PARI/GP and Arb.
We propose several constructions for the original multiplication algorithm of D.V. and G.V. Chudnovsky in order to improve its scalar complexity. We highlight the generic strategies that underlie the optimization of the scalar complexity, according to parameterizable criteria. As an example, we apply this analysis to the construction of elliptic-type Chudnovsky$^2$ multiplication algorithms for small extensions. As a case study, we significantly improve the Baum-Shokrollahi construction for multiplication in $\mathbb{F}_{256}/\mathbb{F}_4$.
We will consider genus 2 curves whose Jacobian is geometrically the product of two elliptic curves with complex multiplication by the same (maximal) order. We will propose an algorithm to count how many of them have field of moduli Q. To this end we will develop an equivalence of categories between certain polarised abelian varieties and Hermitian lattices. This generalises a 2018 article of A. Gélin, E. Howe and C. Ritzenthaler in which the Jacobian is the square of a single elliptic curve.
The explicit determination of perfect powers in (shifted) non-degenerate, integer, binary linear recurrence sequences has only been achieved in a handful of cases. In this talk, we combine bounds for linear forms in logarithms with results from the modularity of elliptic curves defined over totally real fields to explicitly determine all shifted powers by two in the Fibonacci sequence. A major obstacle that is overcome in this work is that the Hilbert newspace which we are interested in has dimension 6144. We will focus on how this space is computationally handled with respect to the underlying Diophantine equation. This is joint work with Mike Bennett (UBC) and Samir Siksek (Warwick).
Rank-metric code-based cryptography relies on the hardness of decoding a random linear code in the rank metric. The Rank Support Learning problem (RSL) is a variant where an attacker has access to N decoding instances whose errors have the same support and wants to solve one of them. This problem is for instance used in the Durandal signature scheme. In this talk, we will present a new algebraic attack on RSL. We build upon Bardet et al., Asiacrypt 2020, where similar techniques are used to solve MinRank and RD. However, our analysis is simpler and overall our attack relies on very elementary assumptions compared to standard Gröbner bases attacks. In particular, our results show that key recovery attacks on Durandal are more efficient than was previously thought. This is joint work with Magali Bardet.
To an algebraic curve $C$ over the complex numbers one can associate a non-negative integer $g$, the genus, as a measure of its complexity. One can also associate to $C$, via complex analysis, a $g \times g$ symmetric matrix $\Omega$ called the period matrix. Because of the natural relation between $C$ and $\Omega$, one can obtain information about one by studying the other. Therefore, it makes sense to consider the inverse problem: given a period matrix $\Omega$, can we compute a model for the associated curve $C$?
In this talk, we will review how to construct the period matrix of a curve, see some known results around this problem, and discuss an application of its solution.
In this talk, I will study the general problem of when the value of the hypergeometric function $F(a,b;c;z)$ is algebraic, assuming $a$,$b$,$c$, and $z$ rational. The results involve modular forms and functions, complex multiplication, Shimura curves, and computer searches.
Satoh's algorithm for counting points on elliptic curves achieves (after improvements by Harvey) quasi-quadratic complexity in the degree for a fixed (small) characteristic $p$. In this talk I will review several variants of this algorithm and its extensions to abelian varieties. I will then explain how the implementation of this algorithm can be greatly simplified. The Pari/GP implementation of the new algorithm yields a factor-30 gain in both computation time and memory consumption.
We show a method to sample an element $\alpha$ from a given ideal $I$, such that the quotient ideal $(\alpha)/I$ is a (possibly large) prime times a smooth number (a "near-prime") with reasonable probability. This method consists of "randomizing" the ideal $I$ by multiplying it with small primes (yielding $J$) and then sampling the element $\alpha$ from this randomized ideal $J$ intersected with a large box. The probability that the quotient $(\alpha)/J$ is prime (i.e., that the quotient $(\alpha)/I$ is a near-prime) is tightly related to density results on prime ideals (the prime ideal theorem). As an application we show an efficient way to compute power residue symbols for number fields of varying degree.
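As an elementary illustration of the "near-prime" condition for plain integers (a sketch of the target shape only, not of the ideal-sampling method itself; the names are mine):

```python
def smooth_part(n, B):
    """Split n > 0 as (s, q): s the B-smooth part, q the remaining cofactor."""
    s, p = 1, 2
    while p <= B:
        while n % p == 0:
            n //= p
            s *= p
        p += 1
    return s, n

def is_prime(n):
    """Trial-division primality test (fine for small demo inputs)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_near_prime(n, B):
    """n is a near-prime if n = (B-smooth number) * (prime or 1)."""
    _, q = smooth_part(n, B)
    return q == 1 or is_prime(q)
```

In the talk, the analogous condition is imposed on the quotient ideal $(\alpha)/I$ rather than on an integer.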
We will present the signature scheme SQISign (for Short Quaternion and Isogeny Signature), which exploits isogeny graphs of supersingular elliptic curves. The signature and public key sizes combined are an order of magnitude smaller than those of all other post-quantum signature schemes. Its efficient implementation and security analysis open new research challenges.
When $L/K$ is a Galois extension of number fields with Galois group $G$, some invariants of $L$ can be related to those of its proper subfields. I will present some old and some new such relations, and an application to the computation of class groups of some large number fields. This is joint work with Jean-François Biasse, Claus Fieker and Tommy Hofmann.
This work contributes to the field of lattice-based cryptography, a research domain of public-key cryptography that was initiated at the end of the 1990s by two different branches. On the one hand, there have been proposals benefiting from strong theoretical connections to presumed hard worst-case lattice problems, leading to the development of public-key cryptography based on the SIS (Short Integer Solution) and LWE (Learning With Errors) problems. On the other hand, very efficient schemes basing their security on average-case structured lattice problems have been introduced, the most popular among them being the NTRU encryption scheme.
Following the latter approach, Hoffstein and Silverman introduced in 2015 a public-key encryption scheme called PASS Encrypt. It is very efficient and enjoys additive and multiplicative homomorphic properties. Unfortunately, the main problem with PASS Encrypt to date is that its security is not well understood: no proof of security was given with respect to the hardness of explicit computational problems, and the scheme is deterministic, hence cannot satisfy the standard notion of IND-CPA security.
In the presented work, we make progress towards understanding the hardness assumptions needed to prove the security of PASS Encrypt. We study the Partial Vandermonde Knapsack problem (PV-Knap) and emphasize its connection to (average-case) ideal lattices. We enlarge the landscape of problems that use the partial Vandermonde matrix by defining a new variant of LWE, called Partial Vandermonde Learning With Errors (PV-LWE). We then show the equivalence of PV-Knap and PV-LWE by exploiting the same duality connection as for standard Knapsack problems and LWE. In order to provide a security proof for PASS Encrypt, we need to define a variant of PV-Knap, which we call the PASS problem. This problem serves (together with the decision version of PV-Knap) as the underlying hardness assumption for (a slightly modified version of) PASS Encrypt. Furthermore, we present the scheme together with the security proof. We conclude the presentation with some interesting open questions regarding problems using the partial Vandermonde transform.
This is joint work with Amin Sakzad and Ron Steinfeld, currently under submission.
The Twisted-PHS algorithm, introduced in 2020 and based on the PHS algorithm by Pellet-Mary, Hanrot and Stehlé (2019), solves approx-SVP for ideal lattices in any number field. Its authors performed experiments for cyclotomic fields of prime conductor and degree at most 70, reporting the exact approximation factors reached in practice. The main obstacle for these experiments is the computation of a log-S-unit lattice, which requires classical subexponential time.
In this work, we extend these experiments to 210 cyclotomic fields of any conductor m and of degree up to 210. Building upon new results from Bernard and Kucera on the Stickelberger ideal, we construct a maximal set of independent S-units lifted from the maximal real subfield using explicit Stickelberger generators obtained via Jacobi sums. Hence, we obtain full-rank log-S-unit sublattices fulfilling the role of approximating the full Twisted-PHS lattice. Notably, our obtained approximation factors match those from the Twisted-PHS team in small dimensions, when it is feasible to compute the original log-S-unit lattice.
As a side result, we use the knowledge of these explicit Stickelberger elements to remove almost all quantum steps in the CDW algorithm, by Cramer, Ducas and Wesolowski in 2021, under the mild restriction that the plus part of the class number satisfies $h_{m}^{+} \leq O(\sqrt{m})$.
The full paper is available on ePrint:2021/1384. This is joint work with Andrea Lesavourey, Tuong-Huy Nguyen and Adeline Roux-Langlois.
The order type of a sequence of points in the plane is a generalisation of the permutation associated with a sequence of real numbers. This combinatorial structure encodes many geometric properties of the point sequence, for example the face lattice of its convex hull, or the triangulations it supports.
This talk will begin with a quick introduction to these objects. I will then discuss a concentration phenomenon that appears when one looks at the order types of random point sequences, for various natural models. This concentration makes a good random exploration of these structures difficult.
This is joint work with Emo Welzl (papers here and here).
We present a new data structure to approximate accurately and efficiently a polynomial $f$ of degree $d$ given as a list of coefficients. Its properties allow us to improve the state-of-the-art bounds on the bit complexity for the problems of root isolation and approximate multipoint evaluation. This data structure also leads to a new geometric criterion to detect ill-conditioned polynomials, implying notably that the standard condition number of the zeros of a polynomial is at least exponential in the number of roots of modulus less than $\frac{1}{2}$ or greater than $2$.
Let $K$ be a number field and $A/K$ an abelian surface. By the Mordell-Weil theorem, the group of $K$-rational points on $A$ is finitely generated and as for elliptic curves, its rank is predicted by the Birch and Swinnerton-Dyer conjecture. A basic consequence of this conjecture is the parity conjecture: the sign of the functional equation of the $L$-series determines the parity of the rank of $A/K$.
Assuming finiteness of the Shafarevich-Tate group, we prove the parity conjecture for principally polarized abelian surfaces under suitable local constraints. Using a similar approach we show that for two elliptic curves $E_1$ and $E_2$ over $K$ with isomorphic $2$-torsion, the parity conjecture is true for $E_1$ if and only if it is true for $E_2$. In both cases, we prove analogous unconditional results for Selmer groups.
In this talk we will first review the classical criteria to determine the (stable) reduction type of elliptic curves (Tate) and of genus 2 curves (Liu) in terms of the valuations of particular combinations of their invariants. We will also revisit the theory of cluster pictures for determining the reduction type of hyperelliptic curves (Dokchitser et al.). Via Mumford's theta constants and the formulas of Takase and Thomae, we will be able to read off the cluster picture information from the valuations of certain (à la Tsuyumine) invariants in the genus 3 case. We will also discuss a possible generalization of this strategy to arbitrary genus, and some related open questions.
Reed-Solomon codes are widely used to represent data as vectors, so that the data can be recovered even if some coordinates of the vectors are corrupted. These codes have many properties. Their parameters are optimal. They allow the reconstruction of coordinates that have been erased. They are compatible with addition and multiplication of data. Nevertheless, they suffer from certain limitations. Notably, the storage size of the coordinates grows logarithmically with the number of coordinates. So-called geometric codes generalise Reed-Solomon codes, enjoying the same properties while being free of these limitations. Consequently, the use of geometric codes yields complexity gains, and proves useful in several applications such as distributed computation on secrets and zero-knowledge proofs. Geometric codes are constructed by evaluating families of functions, called Riemann-Roch spaces, at the rational points of a curve. It follows that computing these spaces is crucial for implementing geometric codes. In this talk, I will present recent joint work with S. Abelard, A. Couvreur and G. Lecerf on the effective computation of bases of Riemann-Roch spaces of curves. After reviewing the state of the art on the subject, I will discuss the ideas underlying our algorithm, in particular Brill-Noether theory and the use of Puiseux expansions. The curves used in the construction of geometric codes are mostly limited to those for which Riemann-Roch bases are already known. This work, and those to follow, will enable the construction of geometric codes from more general curves.
Oriented Supersingular Isogeny Diffie-Hellman (OSIDH) is a post-quantum key exchange proposed by Leonardo Colò and David Kohel in 2019. The construction relies on the action of the class group of an imaginary quadratic order on a space of supersingular elliptic curves, and can thus be seen as a generalisation of the well-known isogeny-based key exchange CSIDH. However, OSIDH is very different from CSIDH from an algorithmic point of view, because it uses class groups that are more structured than those of CSIDH. As Colò and Kohel themselves acknowledged, this makes OSIDH more vulnerable to attacks. To work around this weakness, they proposed an ingenious way to perform the key exchange by exchanging information about the class group action in the neighbourhood of the public curves, and conjectured that this additional information would not impact security.
In this talk, we will reassess the security of OSIDH by proposing a new attack, inspired by earlier work of Onuki. Our attack is exponential, but it breaks the parameters chosen by Colò and Kohel, unlike Onuki's attack. We will also see possible countermeasures against this attack, and analyse their impact on OSIDH in terms of efficiency and functionality.
Theta functions link the algebraic and analytic viewpoints in the study of abelian varieties: they are Siegel modular forms that provide coordinates on these varieties and their moduli spaces. Making this link effective requires an efficient algorithm to evaluate these theta functions at a point. In his thesis (2006), Dupont described a heuristic algorithm based on the arithmetic-geometric mean (AGM) and a Newton scheme to evaluate certain theta functions in genus 1 and 2 in time quasi-linear in the precision. The goal of this talk is to show that one can in fact obtain a certified algorithm with uniform complexity. I will also discuss the remaining obstacles to generalising this result to higher dimension.
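As a toy illustration of the AGM iteration underlying these quasi-linear algorithms (real positive arguments only; Dupont's actual algorithm works with complex theta quotients and a Newton scheme), here is the basic iteration, which roughly doubles the number of correct digits at each step:

```python
def agm(a, b, tol=1e-14):
    """Arithmetic-geometric mean M(a, b) for positive reals a, b.
    Quadratic convergence: each step roughly doubles the correct digits."""
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2, (a * b) ** 0.5
    return (a + b) / 2
```

Gauss's classical observation is that $M(1, \sqrt{2}) = \pi/\varpi \approx 1.19814$, where $\varpi$ is the lemniscate constant; the iteration reaches double precision in a handful of steps.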
F. Oort showed that the moduli space of principally polarized supersingular abelian surfaces is a union of rational curves. This is proven by showing that every principally polarized supersingular abelian surface is the Jacobian of a fibre of one of the families of genus 2 curves $\pi: \mathcal{C} \rightarrow \mathbb{P}^1$ constructed by L. Moret-Bailly. We present an algorithm that makes this construction effective: given a point $x \in \mathbb{P}^1$ we compute a hyperelliptic model of the fibre $\pi^{-1}(x)$. The algorithm uses Mumford's theory of theta groups to compute quotients by the group scheme $\alpha_p$.
$\newcommand{\F}{\mathbb{F}}$Satoh's algorithm for counting the number of points of an elliptic curve $E/\F_q$ with $q=p^n$ is the fastest known algorithm when $p$ is fixed: it computes the invertible eigenvalue $\lambda$ of the Frobenius to $p$-adic precision $m$ in time $\tilde{O}(p^2 n m)$. Since, by Hasse's bound, recovering $\chi_{\pi}$ requires working at precision $m=O(n)$, the point counting complexity is $\tilde{O}(p^2 n^2)$, quasi-quadratic in the degree $n$.
Unfortunately, the term $p^2$ in the complexity makes Satoh's algorithm suitable only for small $p$. For medium-sized $p$, one can use Kedlaya's algorithm, which costs $\tilde{O}(p n^2 m)$, or a variant by Harvey, which costs $\tilde{O}(p^{1/2} n^{5/2} m + n^4 m)$; these have a better complexity in $p$ but a worse one in $n$. For large $p$, the SEA algorithm costs $\tilde{O}(\log^4 q)$.
In this talk, we improve the dependency on $p$ of Satoh's algorithm while retaining the dependency on $n$, to bridge the gap towards medium characteristic. We develop a new algorithm with a complexity of $\tilde{O}(p n m)$. In the particular case where we are furthermore provided with a rational point of $p$-torsion, we even improve this complexity to $\tilde{O}(p^{1/2} n m)$.
This is joint work with Abdoulaye Maiga.
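For contrast with these $p$-adic methods, the quantity they compute can be checked by brute force over tiny fields. The following counter (my own illustration, unrelated to the algorithms' internals) recovers $\#E(\mathbb{F}_p)$, and hence the Frobenius trace $a_p = p + 1 - \#E(\mathbb{F}_p)$:

```python
def count_points(a, b, p):
    """Brute-force #E(F_p) for E: y^2 = x^3 + a*x + b over F_p, p an odd prime."""
    # number of square roots of each residue class mod p
    sqrt_count = {}
    for y in range(p):
        s = y * y % p
        sqrt_count[s] = sqrt_count.get(s, 0) + 1
    n = 1  # the point at infinity
    for x in range(p):
        n += sqrt_count.get((x * x * x + a * x + b) % p, 0)
    return n
```

For example, $E: y^2 = x^3 + x$ over $\mathbb{F}_5$ has 4 points, so $a_5 = 2$, within Hasse's bound $|a_p| \le 2\sqrt{p}$. This costs $O(p)$ field operations per curve, which is exactly what $p$-adic algorithms avoid.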
Solving polynomials is a fundamental computational problem in mathematics. In the real setting, we can use Descartes' rule of signs to efficiently isolate the real roots of a square-free real polynomial. In this talk, we show how to translate this method into the p-adic world. We show how the p-adic analogue of Descartes' rule of signs, Strassman's theorem, leads to an algorithm to isolate the p-adic roots of a square-free p-adic polynomial, and we provide complexity estimates, adapting the condition-based complexity framework from real/complex numerical algebraic geometry to the p-adic case.
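As a small illustrative sketch of the p-adic rule mentioned above (my own code, not the authors' implementation): Strassman's theorem bounds the number of roots of $f$ in $\mathbb{Z}_p$ by the largest coefficient index attaining the minimal $p$-adic valuation.

```python
def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def strassman_bound(coeffs, p):
    """Strassman's bound: for f(x) = sum_i coeffs[i] * x^i with integer
    coefficients, the number of roots of f in Z_p (with multiplicity) is
    at most the largest index where the minimal valuation is attained."""
    vals = [vp(c, p) if c else None for c in coeffs]
    m = min(v for v in vals if v is not None)
    return max(i for i, v in enumerate(vals) if v == m)
```

For instance, $f(x) = x^2 - 1$ gets the bound 2 over $\mathbb{Z}_3$ (its roots $\pm 1$ are 3-adic units), while $f(x) = 1 + 3x$ gets the bound 0: its root $-1/3$ lies outside $\mathbb{Z}_3$.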
In this talk we will describe an explicit approach to computing inertial automorphic types for ${\rm GL}_2$. We will then give some applications of this algorithm to Diophantine problems and to problems of an arithmetic nature.
In a recent paper Don Zagier mentions a mysterious integer sequence $(a_n) _{n geq 0}$ which arises from a solution of a topological ODE discovered by Marco Bertola, Boris Dubrovin and Di Yang. In my talk I show how to conjecture, prove and even quantify that $(a_n) _{n geq 0}$ actually admits an algebraic generating function which is therefore a very particular period. The methods are based on experimental mathematics and algorithmic ideas in differential Galois theory, which I will show in the interactive part of the talk. The presentation is based on joint work with A. Bostan and J.-A. Weil.
This is joint work with P. Cho, Y. Lee and A. Södergren. Since the work of Davenport-Heilbronn, many papers have been written giving increasingly precise estimates for the number of cubic fields of discriminant at most X. Let us mention for example the works of Belabas, Belabas-Bhargava-Pomerance, Bhargava-Shankar-Tsimerman, Taniguchi-Thorne and Bhargava-Taniguchi-Thorne. In this talk I will discuss a negative result, showing that the Riemann hypothesis implies a limitation on the smallest possible size of the error term in these estimates. We approach the question via the theory of low-lying zeros of $L$-functions, in particular the Katz-Sarnak philosophy and subsequent papers on the family of Dedekind zeta functions of cubic fields. I will also present numerical results obtained with pari/gp and Belabas's program "cubic", which indicate that our result could be optimal.
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds.
This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance.
In this work, we provide generic realisations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the Lattice Isomorphism Problem (LIP). More specifically, we provide:
- a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014).
- a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism. This implies an identification scheme based on search-LIP.
- a key encapsulation mechanism (KEM) scheme and a hash-then-sign signature scheme, both based on distinguish-LIP.
The purpose of this approach is to let remarkable lattices improve the security and performance of lattice-based cryptography. For example, decoding within a poly-logarithmic factor of Minkowski's bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to a poly-logarithmic approximation factor, provided that the dual lattice is also close to Minkowski's bound. Recent works have indeed reached such decoders for certain lattices (Chor-Rivest, Barnes-Sloane), but these do not perfectly fit our needs as their duals have poor minimum distance.
Computing the class group and the unit group of a number field is a famous problem of algorithmic number theory. Recently, it has also become an important problem in cryptography, since it is used in multiple algorithms related to algebraic lattices.
Subexponential time algorithms are known to solve this problem in any number field, but they rely heavily on heuristics. The only known non-heuristic algorithm (though still under ERH), due to Hafner and McCurley, is restricted to imaginary quadratic number fields.
In this talk, we will see a rigorous subexponential time algorithm computing units and class group (and more generally S-units) in any number field, assuming the extended Riemann hypothesis.
This is a joint work with Koen de Boer and Benjamin Wesolowski.
TBA
TBA
TBA