Organizers: Razvan Barbulescu and Léo Poyeton
In fact, this is a byproduct of a larger project, in which we construct practical algorithms to solve S-unit, Mordell, cubic Thue, cubic Thue–Mahler, as well as generalized Ramanujan–Nagell equations, and to compute S-integral points on rational elliptic curves with given Mordell–Weil basis. Our algorithms rely on new height bounds, which we obtained using the method of Faltings (Arakelov, Parshin, Szpiro) combined with the Shimura–Taniyama conjecture (without relying on linear forms in logarithms), as well as several improved and new sieves. In addition we used the resulting data to motivate several conjectures and questions, such as Baker’s explicit abc-conjecture, and a new conjecture on the number of S-integral points of rational elliptic curves.
This is joint work with Rafael von Känel.
We will explain how the mod l Galois representation attached to a classical newform may be efficiently computed, by isolating it among the l-torsion of a modular Jacobian. This yields a way of computing the coefficient a(p) of the form in time polynomial in log p, which makes it the most efficient method known to date.
We will show how the Galois representation computations presented in last week’s talk may be certified, thanks to Serre’s modularity conjecture and explicit group cohomology computations.
This is joint work with Antoine Joux.
Joint work with Laurent Imbert and Fabien Laguillaumie.
One way to study circulant Hadamard matrices is the so-called ``field-descent method’’. The essential fact behind this method is that certain cyclotomic integers necessarily are contained in relatively small fields and thus must have relatively small complex modulus. In this talk, I will present a method which reveals a complementary phenomenon: certain cyclotomic integers cannot be contained in relatively small fields and thus must have relatively large complex modulus. This method provides new necessary conditions for the existence of circulant Hadamard matrices.
This is joint work with K. H. Leung.
The best known estimates for $\chi_m(\mathbb{R}^n)$ and $m_1\left(\mathbb{R}^n,\Vert \cdot \Vert\right)$ exhibit connections with Euclidean lattices, in particular with the sphere packing problem.
The determination of $m_1\left(\mathbb{R}^n,\Vert \cdot \Vert\right)$ for the Euclidean norm is still a difficult question. We study this problem for norms whose unit ball is a convex polytope. More precisely, if the unit ball corresponding to $\Vert \cdot \Vert$ tiles $\mathbb{R}^n$ by translation, for instance if it is the Voronoi region of a lattice, then it is easy to see that $m_1\left(\mathbb{R}^n,\Vert \cdot \Vert\right)\geq \frac{1}{2^n}$.
C. Bachoc and S. Robins conjectured that equality always holds. We show that this conjecture is true for $n=2$ and for some families of Voronoi regions of lattices in higher dimensions.
In all of these methods and algorithms, one works with computations over the $p$-adic numbers, and the management of precision there is a crucial issue. With Xavier Caruso and David Roe, we developed a method, called differential precision, to study and manage $p$-adic precision.
In this talk, we will focus on the application of this method to the computation of isogenies between elliptic curves via the resolution of certain $p$-adic differential equations with separable variables. This is joint work with Pierre Lairez from 2016, which only treats the case $p>2$. We will also present some recent progress in the case $p=2$.
We will begin the talk with a review of the standard pairing constructions on an abelian variety.
The aim of this talk is to present some beautiful mathematics related to this theorem. I will begin by sketching two “classical” proofs of this result, then show how to combine them to obtain an effective variant of Christol’s theorem. I will then present an application of this result to a question of an algorithmic nature.
Time permitting, I will also discuss the links between Christol’s theorem and p-adic differential equations, and show how this new ingredient can be used to speed up the previous algorithm.
(Joint work with A. Bostan, G. Christol and Ph. Dumas.)
The canonical surjection \[ \pi_l \colon J_K(C) \longrightarrow J_K(D) = J_K(C) / \mathcal{V} \] is an $(l,l)$-isogeny with kernel $\mathcal{V}$.
In this talk, we study the algorithm of Couveignes and Ezome for finding the equation of the curve $D$ from its Kummer variety, built from functions defined on the Jacobian of the starting curve and its quotient, together with the equations defining the isogeny $\pi_l$.
However, constructions for general FE are far from practical, or rely on non-standard and poorly understood cryptographic assumptions.
In this talk I will focus on the construction of efficient FE schemes for linear functions (i.e. the inner product functionality), and the framework in which our constructions hold. Such schemes yield many practical applications, and our constructions are the first FE schemes for inner products modulo a prime that are both efficient and recover the result whatever its size. Our framework consists of a cyclic group $G$ where the decision Diffie-Hellman assumption holds, together with a subgroup $F$ of $G$ where the discrete logarithm problem is easy. This setting can be instantiated with class groups of imaginary quadratic fields, and allows us to encode information in the exponent of the subgroup $F$, which can be efficiently recovered whatever its size.
Between 2016 and 2018, Astudillo, Diaz y Diaz, Friedman and Ramirez-Raposo gave a complete classification of number fields with low regulator for any degree $\leq 7$ and for totally real and totally complex octic fields, relying both on Remak-Friedman’s inequalities and on a procedure derived from an explicit formula for the Dedekind zeta function.
In joint work with Giuseppe Molteni, we propose a conjectural improvement of the upper bounds for the discriminant which would allow, using the same method, a classification for other signatures in degrees 8 and 9: the main conjecture deals with the sharpest estimate for a factor which in fact depends on the signature of the fields.
The immediate goal is to create a web-based special functions reference work that addresses some of the drawbacks of resources such as the NIST Digital Library of Mathematical Functions, the Wolfram Functions site, and Wikipedia. A potential longer-term ambition is to provide a software library for symbolic knowledge about special functions, usable by computer algebra systems and theorem proving software.
This talk will discuss the motivation behind the project, design issues,and possible applications.
Ray class fields are generalizations of Hilbert class fields. For a positive integer $m > 0$, the ray class field $H_K(m)$ is obtained by relaxing the ramification conditions for ideals of $\mathcal{O}_K$ dividing $m$.
It turns out that there is a particular subfield $L(m)$ of $H_K(m)$ which can be generated using special values of higher-level modular functions and Stark’s conjectures. For some values of $m$, this $L(m)$ contains the Hilbert class field $H_K(1)$. Thus, we can compute the Hilbert class field as a subfield of $L(m)$. In this talk, we find an upper bound for such an integer $m$.
If time permits, we will discuss how to compute the Hilbert class field as a subfield of this $L(m)$ when $m = 2$.
We will survey and synthesize the current proposals submitted to the NIST competition. We will also look at code-based signature primitives, an area noticeably less developed than encryption.
In this talk, I will explain how we can determine the exact result of the SDP. Furthermore, we use these techniques to obtain exact results for the kissing problem of the hemisphere in dimension 8 as well as for other packing problems.
Using the exact rational solution for the kissing problem of the hemisphere, we can prove that the optimal kissing configuration is unique up to isometry.
This is a joint work with David de Laat and Philippe Moustrou.
On the cryptographic side, many algorithms based on lattices in fact use structured lattices, in order to improve the efficiency of the schemes. Most of the time, these structured lattices are $R$-modules of small dimension, where $R$ is the ring of integers of some number field. It is then tempting to try and adapt the LLL algorithm, which works over lattices (i.e., $\mathbb Z$-modules), to these $R$-modules.
All previous works addressing this question focused on rings of integers $R$ that are Euclidean, as the LLL algorithm over $\mathbb Z$ crucially relies on the Euclidean division. In this talk, I will describe the first LLL algorithm which works in any ring of integers $R$. This algorithm is heuristic and runs in quantum polynomial time if given access to an oracle solving the closest vector problem in a fixed lattice, depending only on the ring of integers $R$.
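To see where Euclidean division enters, here is a sketch of the rank-2 base case of LLL over $\mathbb Z$ (the Lagrange–Gauss algorithm). The nearest-integer rounding in the loop is precisely the operation with no analogue in a general ring of integers $R$.

```python
def lagrange_gauss(u, v):
    """Reduce a rank-2 lattice basis over Z (the 2-dimensional case of LLL).
    The key step is Euclidean: rounding a Gram coefficient to the nearest
    integer -- exactly the operation missing in a general ring of integers."""
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        m = round(dot(u, v) / dot(u, u))       # nearest-integer "division"
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v                        # |u| <= |v|, quasi-orthogonal
        u, v = v, u

print(lagrange_gauss((4, 1), (7, 2)))  # ((-1, 0), (0, 1))
```

The output basis generates the same lattice (here all of $\mathbb Z^2$, since the input has determinant 1) and attains the successive minima.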
This is joint work with Changmin Lee, Damien Stehlé and Alexandre Wallet.
We will consider such data compatible with the action of a group $G\subset\mathrm{SO}(E)$, and describe two situations in more detail:
$\bullet$ The action of a group of order 7 in real dimension $6$, a kind of appendix to the article of Elkies in The Eightfold Way (The beauty of Klein’s quartic curve), Cambridge University Press (1999); S. Levy ed.
$\bullet$ Actions of groups that are “not too small” in dimension $4$, in relation with genus $2$ curves.
In every case the key ingredient is Torelli’s theorem.
The effective computation of bases of Riemann–Roch spaces arises in many practical areas, notably for arithmetic in Jacobians of curves and for algebraic-geometric error-correcting codes. We propose a probabilistic variant of the Brill–Noether algorithm described by Goppa for computing a basis of the Riemann–Roch space $L(D)$ associated to a divisor $D$ of a nodal plane projective curve $C$ over a sufficiently large perfect field $k$.
We prove that its complexity (measured in arithmetic operations in the field $k$) is $O\big(\max\big(\deg(C)^{2\omega}, \deg(D^+)^{\omega}\big)\big)$, where $\omega < 2.38$ is the exponent of linear algebra and $D^+$ is the smallest effective divisor with $D^+ \geq D$. This probabilistic algorithm may fail, but under some conditions we prove that its failure probability is bounded by $O\big(\max\big(\deg(C)^4, \deg(D^+)^2\big)/|E|\big)$, where $E$ is a finite subset of $k$ from which elements of $k$ can be drawn uniformly at random.
To our knowledge, this complexity bound is the best obtained so far for the computation of Riemann–Roch spaces in a general setting. In the context of computing the group law in the Jacobian of a smooth curve, our bound also improves on the best previously known bound, due to Khuri-Makdisi. Our algorithm also benefits from the fact that its efficiency relies on two building blocks for which efficient algorithms exist: linear algebra and univariate polynomial arithmetic. We implemented this algorithm in C++/NTL. The experimental results obtained with this implementation indicate an improvement in running times compared with the implementation in the computer algebra system Magma (up to 200 times faster on some instances over large finite fields).
We present a fully verified arbitrary-precision integer arithmetic library designed using the Why3 program verifier. It is intended as a verified replacement for the mpn layer of the state-of-the-art GNU Multi-Precision library (GMP).
The formal verification is done using a mix of automated provers and user-provided proof annotations. We have verified the GMP algorithms for addition, subtraction, multiplication (schoolbook and Toom-2/2.5), schoolbook division, divide-and-conquer square root and modular exponentiation. The rest of the mpn API is work in progress. The main challenge is to preserve and verify all the GMP algorithmic tricks in order to get good performance.
Our algorithms are implemented as WhyML functions. We use a dedicated memory model to write them in an imperative style very close to the C language. Such functions can then be extracted straightforwardly to efficient C code. For medium-sized integers (less than 1000 bits, or 100,000 for multiplication), the resulting library is performance-competitive with the generic, pure-C configuration of GMP.
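As a point of reference for what such a verified routine must capture, here is a plain-Python sketch of schoolbook limb multiplication in the style of GMP's mpn layer. The names `mpn_mul`, `to_limbs` and `from_limbs` are illustrative only; the real mpn code adds the algorithmic tricks mentioned above, and the point of the verification effort is to prove the carry invariants of exactly this kind of loop.

```python
LIMB = 1 << 64  # GMP limbs on a 64-bit machine

def mpn_mul(a, b):
    """Schoolbook product of two little-endian limb vectors (O(n*m) limb
    multiplies). The inner loop's carry invariant is the kind of property
    a WhyML specification must state and prove."""
    r = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = r[i + j] + ai * bj + carry
            r[i + j] = t % LIMB
            carry = t // LIMB
        r[i + len(b)] = carry   # this limb has not been written yet
    return r

def to_limbs(n):
    out = []
    while n:
        out.append(n % LIMB)
        n //= LIMB
    return out or [0]

def from_limbs(limbs):
    return sum(l << (64 * i) for i, l in enumerate(limbs))

a, b = 2**100 + 123, 2**90 + 456
print(from_limbs(mpn_mul(to_limbs(a), to_limbs(b))) == a * b)  # True
```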
The Cohen–Lenstra–Martinet heuristics are a probabilistic model for the behaviour of class groups of number fields in natural families. In this talk, I will discuss a generalisation to ray class groups. About 5 years ago, Varma determined the average number of 3-torsion elements in the ray class group of K with respect to m, when m is a fixed rational modulus, and K runs through the family of imaginary quadratic or of real quadratic fields. Since then, Bhargava has been challenging the community to come up with a natural probabilistic model that would explain the numbers obtained by Varma, and to predict more general averages in more general families of number fields. As I will explain in my talk, there turns out to be a very simple-minded way of doing so, and also a much more conceptual one, and they both turn out to be equivalent. The more conceptual one involves an object that does not appear to have been treated in the literature before, but that is very natural: the Arakelov ray class group of a number field. This is joint work with Carlo Pagano.
Given a Galois extension of number fields L/K, the Chebotarev theorem asserts the equidistribution of the Frobenius elements, attached to the unramified prime ideals, among the conjugacy classes of Gal(L/K). We will present a study of the variations of the error term in Chebotarev’s theorem as L/K runs through certain families of extensions. We will give a transfer formula for the classical prime (or prime ideal) counting functions, which reduces the situation to that of an extension of the rationals. Finally, we will present some consequences for “Linnik-type” problems and for the analogue of the Chebyshev bias phenomenon in number fields. This talk is based on joint work with D. Fiorilli.
We will present a p-adic method to compute Galois representations attached to modular forms. This method compares very favourably to the better-known complex-analytic approach. The main ingredient is the use of “moduli-friendly” forms introduced by Makdisi, which allow us to evaluate modular forms at p-adic points of modular curves, and thus to compute in the Jacobian of modular curves without writing down any equations or q-expansions.
We give an overview of the history of computing Galois groups over p-adic fields, with some diversions to recent progress over the rational field. We focus on the “resolvent method,” a family of techniques to compute Galois groups, and present a recent algorithm to do this in general over p-adic fields, the first of its kind. This algorithm greatly increases the degree of polynomial that can be routinely handled, and for example has been used to extend existing databases of Galois groups of p-adic fields to include all degree 18, 20 and 22 extensions of the 2-adic field. The implementation and tables of results are available on the speaker’s website.
In this talk, I will explain how to test efficiently and effectively whether two modular Galois representations of the absolute Galois group of the rationals are isomorphic. In particular, I will present new optimal bounds on the number of traces to be tested. I will also briefly discuss isomorphism graphs, related results on Hecke algebras, and the construction of a database of representations.
We present a new algorithm for computing the characteristic polynomials of the $p$-curvatures of a differential operator with integer coefficients, for all primes $p$ below a given integer $N$, in time quasi-linear, hence quasi-optimal, in $N$. The algorithm builds on the work of A. Bostan, X. Caruso and É. Schost, which reduces the computation of this invariant to a matrix factorial, as well as on the factorial computation technique developed by E. Costa, R. Gerbicz and D. Harvey.
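To fix ideas, here is a naive sketch of the "matrix factorial" object that the reduction produces: a product $A(N-1)\cdots A(1)A(0)$ of matrices with entries depending polynomially on the index, taken mod $p$. The function `matrix_factorial` below is only a linear-time illustration of what is being computed, not the quasi-linear algorithm of the talk (which handles all $p < N$ at once); the diagonal test matrix is a hypothetical example chosen so the product reduces to an ordinary factorial.

```python
import math

def matmul(A, B, p):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def matrix_factorial(A, N, p):
    """Naive O(N) computation of A(N-1) ... A(1) A(0) mod p."""
    n = len(A(0))
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for k in range(N):
        R = matmul(A(k), R, p)
    return R

# sanity check: with A(k) = [[k+1, 0], [0, 1]], the top-left entry is N! mod p
A = lambda k: [[k + 1, 0], [0, 1]]
print(matrix_factorial(A, 10, 101)[0][0] == math.factorial(10) % 101)  # True
```

The quasi-linear methods replace this scan by baby-step/giant-step evaluations of products of consecutive matrices, amortized over all primes at once.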
Calcium is a C library for real and complex numbers in a form suitable for exact algebraic and symbolic computation. Numbers are represented as elements of fields $\mathbb{Q}(a_1,\ldots,a_n)$ where the extension numbers $a_k$ may be algebraic or transcendental. The system combines efficient field arithmetic with automatic construction of fields and ideals of algebraic relations, resulting in a practical computational model of $\mathbb{R}$ and $\mathbb{C}$ in which equality is rigorously decidable for a large class of numbers which includes $\overline{\mathbb{Q}}$ as a subset.
Given an algebraic group $G$ acting on an affine space $V$, the set $V(\mathbb{Z})/G(\mathbb{Z})$ of integral orbits sometimes parametrizes arithmetic objects and is therefore interesting to enumerate. One way to proceed is to exhibit a fundamental domain for the action of $G(\mathbb{Z})$ on $V(\mathbb{R})$ and to search for integral points in it. To do this, one can try to cover this fundamental domain by a family of Euclidean balls of constant radius whose cardinality has the same order of magnitude as the number of integral points. I will show how to carry out this strategy in the simple case of the right action of $\mathrm{GL}_n$ on $\mathrm{M}_n$, whose integral orbits parametrize the submodules of $\mathbb{Z}^n$, and for which easy-to-describe approximate fundamental domains are available, namely Siegel sets.
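For $n=2$ the parametrization can be made completely explicit: every $\mathrm{GL}_2(\mathbb{Z})$-orbit of integer matrices of determinant $N$ contains exactly one upper-triangular Hermite normal form, so the index-$N$ sublattices of $\mathbb{Z}^2$ can be listed directly. A minimal sketch (the function name is illustrative):

```python
def index_n_sublattices(n):
    """Sublattices of Z^2 of index n, via Hermite normal forms.
    Each GL_2(Z)-orbit of integer 2x2 matrices of determinant n contains
    exactly one representative [[a, b], [0, d]] with a*d = n, 0 <= b < d."""
    forms = []
    for d in range(1, n + 1):
        if n % d == 0:
            a = n // d
            forms.extend(((a, b), (0, d)) for b in range(d))
    return forms

# the count is the divisor sum sigma(n): e.g. sigma(6) = 1 + 2 + 3 + 6 = 12
print(len(index_n_sublattices(6)))  # 12
```

Counting integral points in a Siegel set recovers, in this toy case, the classical asymptotics for the summatory function of $\sigma$.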
We consider the problem of deciding whether two matrices are conjugate. If the coefficient ring is a field, this problem can be easily solved by using the Jordan normal form or the rational canonical form. For more general coefficient rings, the situation becomes increasingly challenging, both from a theoretical and a practical viewpoint. In this talk, we show how the conjugacy problem for integer matrices can be efficiently decided using techniques from group and number theory. This is joint work with Bettina Eick and Eamonn O’Brien.
The Chebyshev bias is a phenomenon first studied in the setting of “prime number races” (Rubinstein and Sarnak, 1994) to explain the apparent predominance of primes congruent to 3 mod 4 over those congruent to 1 mod 4. These prime number races were generalized, notably to the setting of number fields, by Ng in 2000. In recent work, Fiorilli and Jouve studied the Chebyshev bias in families of number field extensions and exhibited limiting behaviours of “large deviations” and “central limit theorem” type. In the work presented here, I highlight the considerable influence that certain real zeros of Artin L-functions have on the Chebyshev bias in particular extensions of number fields.
We present an effective method for computing, over the $p$-adics, isogenies between elliptic curves and Jacobians of hyperelliptic curves of small genus. An important application is the case of curves defined over a finite field of small characteristic, which can be handled by lifting to the $p$-adics. Our algorithm relies on solving differential equations with good control of the precision loss. Its analysis is based on the theory of differential precision developed by Caruso, Roe and Vaccon.
In recent years, the notion of rank metric in the context of coding theory has seen many interesting developments in terms of applications such as space time coding, network coding or public key cryptography. These applications raised the interest of the community for theoretical properties of this type of codes, such as the hardness of decoding in rank metric or better decoding algorithms. Among classical problems associated to codes for a given metric, the notion of code equivalence has always been of the greatest interest. In this talk, we discuss the hardness of the code equivalence problem in rank metric for $\mathbb{F}_{q^m}$-linear and general rank metric codes.
Quantum computing is an increasingly hot topic, as it promises many advantages, whether in the complexity of its algorithms or in what it enables in cryptography. In this talk, we will first look at quantum circuits: the model usually used by researchers and engineers to describe quantum processes. We will consider a fundamental question about these circuits, that of the completeness of an equational theory. We will then present the ZX-calculus, a model arising from category theory, which, for its part, answers this same question positively.
Given a partial differential equation (PDE), its solutions can be difficult, if not impossible, to describe. The purpose of the Fundamental theorem of tropical (partial) differential algebraic geometry is to extract from the equations certain properties of the solutions. More precisely, this theorem proves that the support of the solutions in $k[[t_1, \ldots, t_m]]$ (with $k$ a field of characteristic zero) can be obtained by solving a so-called tropicalized differential system.
We analyze complex multiplication for Jacobians of curves of genus 3, as well as the resulting Shimura class groups and their subgroups corresponding to Galois conjugation over the reflex field. We combine our results with numerical methods to find CM fields $K$ for which there exist both hyperelliptic and non-hyperelliptic curves whose Jacobian has complex multiplication by $\mathbb{Z}_K$.
Toric codes, introduced by Hansen in 2002, generalize (weighted) Reed–Muller codes to toric varieties other than projective spaces. They are evaluation codes of monomials at tuples of non-zero coordinates, which correspond to the points of the dense torus contained in the associated toric variety. Our aim is to ‘projectivise’ these codes, in the same spirit that turns a Reed–Muller code into a projective one: we consider codes obtained by evaluating global sections on the whole set of rational points of a toric variety. We focus on simplicial toric varieties, which come with a nice quotient description, and we give an explicit construction of projective codes on them, as well as a combinatorial way to determine their parameters. ‘Projectivising’ toric codes opens new possibilities of getting codes with excellent parameters, by extending some champion classical toric codes geometrically.
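As a small sanity check of the classical (non-projective) construction, the following sketch builds the toric code of the unit square polygon over $\mathbb{F}_5$: evaluate the monomials $1, x, y, xy$ at the 16 points of the torus $(\mathbb{F}_5^*)^2$ and find the minimum distance by brute force. Everything here follows from the definitions alone; the choice of $q=5$ and of the polygon is illustrative.

```python
from itertools import product

q = 5
torus = [(x, y) for x in range(1, q) for y in range(1, q)]   # (F_q^*)^2
monoms = [(0, 0), (1, 0), (0, 1), (1, 1)]                    # unit square

# generator matrix: evaluate each monomial at each torus point
G = [[x**i * y**j % q for (x, y) in torus] for (i, j) in monoms]

def min_distance():
    best = len(torus)
    for cs in product(range(q), repeat=len(G)):              # all codewords
        if any(cs):
            w = sum(1 for col in range(len(torus))
                    if sum(c * G[r][col] for r, c in enumerate(cs)) % q)
            best = min(best, w)
    return best

print(len(torus), len(G), min_distance())  # a [16, 4] code with d = 9
```

The minimum weight 9 is attained by $(x-1)(y-1)$, which vanishes at the 7 torus points with $x=1$ or $y=1$.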
Sturm bounds tell us how many successive Fourier coefficients suffice to determine a modular form. For classical modular forms, they also provide bounds on the number of Hecke operators generating the algebra of the same name. This talk studies the situation for certain automorphic forms over function fields, called Drinfeld modular forms. This is joint work with Fu-Tsun Wei (National Tsing-Hua University, Taiwan).
This talk describes joint work with Clément Pernet on an algorithm which computes the characteristic polynomial of a matrix over a field within the same asymptotic complexity, up to constant factors, as the multiplication of two square matrices. Previously, this was only achieved by resorting to genericity assumptions or randomization techniques, while the best known complexity bound with a general deterministic algorithm was obtained by Keller-Gehrig in 1985 and involves logarithmic factors. The new algorithm computes more generally the determinant of a univariate polynomial matrix in reduced form.
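For contrast with the matrix-multiplication-time result, here is a sketch of a classical deterministic baseline: the Faddeev–LeVerrier algorithm, which computes the characteristic polynomial over a field of characteristic zero in $O(n^4)$ field operations. This is only a reference point, not the algorithm of the talk.

```python
from fractions import Fraction

def charpoly(A):
    """Faddeev-LeVerrier: coefficients of det(xI - A), highest degree first.
    O(n^4) field operations over Q -- far from matrix-multiplication time."""
    n = len(A)
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        AM = [[sum(A[i][l] * M[l][j] for l in range(n))
               for j in range(n)] for i in range(n)]
        ck = -sum(AM[i][i] for i in range(n)) / k   # next coefficient
        coeffs.append(ck)
        M = [[AM[i][j] + (ck if i == j else 0) for j in range(n)]
             for i in range(n)]
    return coeffs

print(charpoly([[2, 1], [1, 2]]))  # x^2 - 4x + 3
```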
The effective determination of integral points on algebraic varieties is a difficult problem, especially in dimension greater than 1. In this talk, I will briefly present two natural approaches to integral points which, in favourable cases, allow one to find all of them. In seeking refinements of these methods, one arrives at interesting combinatorial problems, which I will highlight in the specific case of a “modular” variety of dimension 3, which can be defined by a quartic equation in $\mathbb{P}^4$.
Motivated by the richness of Gauss’s algorithm, which efficiently computes the best rational approximations of a real number, many mathematicians have proposed generalizations of such algorithms for approximating vectors of dimension greater than 1. Examples include the algorithm introduced by Poincaré at the end of the 19th century, and those of Brun and Selmer in the middle of the 20th century.
From the early 1990s to today, a number of works have studied the convergence of these algorithms. Schweiger and Broise notably proved that the Selmer and Brun algorithms are convergent and ergodic. Perhaps more surprisingly, Nogueira proved that the algorithm proposed by Poincaré almost never converges.
Starting from the classical case of the Farey algorithm, which is an “additive” version of Gauss’s algorithm, I will present a combinatorial point of view on these algorithms which allows the passage from a deterministic vision to a probabilistic approach. Indeed, in this model, taking a random vector for the Lebesgue measure corresponds to following a random walk with memory in a labelled graph called a simplicial system. The laws of this random walk are elementary, and we can develop probabilistic techniques to study their generic dynamical behaviour. This will lead us to a purely graph-theoretic criterion for proving the convergence or non-convergence of a continued fraction algorithm.
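The one-dimensional object being generalized is easy to state in code: Gauss's algorithm iterates the map $x \mapsto 1/(x - \lfloor x\rfloor)$ and assembles the convergents $p_k/q_k$, which are the best rational approximations. A minimal sketch for a positive rational input:

```python
from fractions import Fraction

def convergents(x):
    """Convergents p_k/q_k of a positive rational x via Gauss's continued
    fraction map x -> 1/(x - floor(x)); the dimension-1 algorithm whose
    higher-dimensional analogues (Poincare, Brun, Selmer) are at stake."""
    p0, q0, p1, q1 = 1, 0, int(x), 1    # p_{-1}/q_{-1} and p_0/q_0
    out = [Fraction(p1, q1)]
    x -= int(x)
    while x:                             # terminates since x is rational
        x = 1 / x
        a = int(x)
        x -= a
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append(Fraction(p1, q1))
    return out

print(convergents(Fraction(45, 16)))  # 45/16 = [2; 1, 4, 3]
```

The Farey algorithm replaces the division step by repeated subtractions, which is the "additive" variant referred to above.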
Let $E/F$ be an elliptic curve defined over a number field $F$, and let $K/F$ be a quadratic extension. If the analytic rank of $E(K)$ is one, one can often use Heegner points (or the more general Darmon points) to produce (at least conjecturally) a nontorsion generator of $E(K)$. If the analytic rank of $E(K)$ is larger than one, the problem of constructing algebraic points is still very much open. In very recent work, Michele Fornea and Lennart Gehrmann have introduced certain $p$-adic quantities that may be conjecturally related to the existence of these points. In this talk I will explain their construction, and illustrate it with some numerical experiments we have carried out in support of their conjecture. This is joint work with Michele Fornea and Xevi Guitart.
We study property (FA) and its hereditary version for unit groups of $2$-by-$2$ matrices over orders in totally definite quaternion algebras with rational centres. In particular, we consider the three matrix rings over totally definite rational quaternion algebras that can appear as Wedderburn–Artin components of a group ring $\mathbb{Q}G$.
A key step is the construction of amalgamated decompositions of the elementary group $E_2(\mathcal O)$, where $\mathcal O$ is an order in a rational division algebra, and of certain arithmetic groups $\Gamma$. The methods for the latter turn out to work in much greater generality, and most notably are carried out to obtain amalgam decompositions for the higher modular groups $\mathrm{SL}_+(\Gamma_n(\mathbb Z))$, with $n\le 4$, which can be seen as higher-dimensional versions of the modular and Bianchi groups.
Let $K$ be a quadratic field which is not an imaginary quadratic field of class number one. We describe an algorithm to compute a superset of the set of primes $p$ for which there exists an elliptic curve over $K$ admitting a $K$-rational $p$-isogeny. Combining this algorithm with recent work on the determination of quadratic points on low-genus modular curves, we determine, conditionally on the Generalised Riemann Hypothesis, the above set of isogeny primes for several quadratic fields, providing the first such examples after Mazur’s 1978 determination for $K = \mathbb{Q}$. We will give a live demo of the Sage and PARI/GP implementations of the algorithm.
In this talk, we will review the state of the art of algorithms for solving the Euclidean lattice problems underlying cryptography. In more detail, this talk has two parts. In the first part, we will focus on lattice problems such as the approximate Shortest Vector Problem (approx-SVP) and on lattice reduction algorithms, the best known solving algorithms so far. In particular, I will present an improved enumeration-based lattice reduction algorithm, which is shown to be (potentially) relevant to cryptanalysis. In the second part, we will instead consider a quantum problem that is computationally equivalent to approx-SVP. By directly solving a quantum problem, we may expect a more powerful use of quantum computation. However, the best known algorithms for solving approx-SVP via this quantum problem are not yet better than lattice reduction.
Pairings on elliptic curves are involved in signatures, NIZK proofs, and recently in blockchains (ZK-SNARKs). These pairings take as input two points on an elliptic curve $E$ over a finite field, and output a value in an extension of that finite field. Usually, for efficiency reasons, this extension degree is of the form $2^i3^j$ (such as 12, 18, or 24), and moreover the characteristic of the finite field has a special form. The security relies on the hardness of computing discrete logarithms in the group of points of the curve and in the finite field extension.
In 2013-2016, new variants of the function field sieve and the number field sieve algorithms turned out to be faster in certain finite fields related to pairing-based cryptography, in particular those which had a very efficient arithmetic. Small-characteristic settings are now discarded. The situation for $GF(p^k)$ where $p$ is prime and $k$ is small is still quite unclear. We refine the work of Menezes-Sarkar-Singh and Barbulescu-Duquesne to estimate the cost of a hypothetical implementation of the Special-Tower-NFS in $GF(p^k)$ for small $k$, and deduce parameter sizes for cryptographic pairings.
Joint work with Shashank Singh, IISER Bhopal, India.
References
On the alpha value of polynomials in the tower number field sieve algorithm, Aurore Guillevic and Shashank Singh, Mathematical Cryptology, Vol 1 No 1 (Feb 2021), journal version, preprint.
A short list of pairing-friendly curves at the 128-bit security level, Aurore Guillevic, presented at PKC’2020 recorded talk, ePrint 2019/1371.
Implementation available with MIT licence on gitlab. Alpha in Magma, alpha and TNFS simulation in SageMath.
Integer factorization algorithms and discrete logarithm algorithms, at cryptographic sizes, have two steps relevant to this talk: polynomial selection and cofactorization. The first consists in selecting two homogeneous polynomials $F(x,y)$ and $G(x,y)$ in $\mathbb{Z}[x,y]$ such that the set of integers $\{F(a,b)G(a,b)\mid a,b\in\text{a rectangle},\ \gcd(a,b)=1\}$ contains as many $B$-smooth integers as possible (integers all of whose prime factors are less than $B$). The second consists in factoring integers of the form $F(a,b)$ and $G(a,b)$.
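The smoothness predicate driving both steps can be sketched directly; the polynomial pair below is a hypothetical toy example (real cofactorization uses ECM rather than trial division, but tests the same property).

```python
from math import gcd

def primes_up_to(B):
    sieve = [True] * (B + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(B**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, s in enumerate(sieve) if s]

def is_smooth(n, B):
    """Trial-division B-smoothness test."""
    n = abs(n)
    if n == 0:
        return False
    for p in primes_up_to(B):
        while n % p == 0:
            n //= p
    return n == 1

# hypothetical toy pair F, G on a small rectangle: count 100-smooth products
F = lambda a, b: a**3 + 2 * b**3
G = lambda a, b: a - 10 * b
count = sum(1 for a in range(1, 30) for b in range(1, 30)
            if gcd(a, b) == 1 and is_smooth(F(a, b) * G(a, b), 100))
print(count)
```

Polynomial selection tries to maximize exactly this count over the sieving rectangle, which is what Murphy's $\alpha$ function estimates.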
P. Montgomery (1986) accelerated cofactorization by using elliptic curves corresponding to rational points on certain modular curves. Mazur's program can be stated as follows: given a number field $K$, bound the level of the modular curves that have $K$-rational points and compute those points effectively. Work of Rouse, Sutherland, Zureick-Brown and Zywina (2016, 2017) partially settled the case $K=\mathbb{Q}$.
Recent progress on Mazur's program B (2019-2021) gives partial answers to several questions: a) the rational points of modular curves with finitely many points; b) curves of composite level; c) $K$-rational points for quadratic fields $K$; d) torsion points over number fields.
We propose a minor modification of the polynomial selection step. The state of the art consists of constructing a large number of polynomials by Kleinjung's methods (2006) and keeping those which optimize Murphy's $\alpha$ function (2000). Empirically, this increases the probability that $F(a,b)$ and $G(a,b)$ are $B$-smooth. We propose to also take into account the existence of families of elliptic curves that quickly factor integers of the form $F(a,b)$. This amounts to describing the number fields $K$ of given degree, for instance $2$, admitting modular curves with $K$-rational points.
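The notion of $B$-smoothness that drives polynomial selection can be checked by trial division. Below is a toy Python sketch (tiny parameters, far from cryptographic sizes; the polynomials `F`, `G` and the bounds are made up for illustration):

```python
from math import gcd

def is_smooth(n, B):
    """True if every prime factor of |n| is at most B (trial division)."""
    n = abs(n)
    if n == 0:
        return False
    f = 2
    while f * f <= n:
        while n % f == 0:
            n //= f
        f += 1
    return n <= B

def smooth_count(F, G, B, box):
    """Count coprime pairs (a, b) in a box with F(a,b)*G(a,b) B-smooth."""
    return sum(1
               for a in range(-box, box + 1)
               for b in range(1, box + 1)
               if gcd(a, b) == 1 and is_smooth(F(a, b) * G(a, b), B))

# Toy homogeneous polynomials standing in for an NFS polynomial pair.
F = lambda a, b: a**3 + 2 * b**3
G = lambda a, b: a - 5 * b
print(smooth_count(F, G, B=30, box=10))
```

Comparing `smooth_count` across candidate pairs is a brute-force stand-in for what Murphy's $\alpha$ function estimates analytically.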
This special session is dedicated to welcoming the interns of the LFANT team. After a general presentation of the team, we will present two software packages that we develop: PARI/GP and Arb.
We propose several constructions for the original multiplication algorithm of D.V. and G.V. Chudnovsky in order to improve its scalar complexity. We highlight the set of generic strategies that underlie the optimization of the scalar complexity, according to parameterizable criteria. As an example, we apply this analysis to the construction of elliptic Chudnovsky-type multiplication algorithms for small extensions. As a case study, we significantly improve the Baum-Shokrollahi construction for multiplication in $\mathbb{F}_{256}/\mathbb{F}_4$.
We consider genus 2 curves whose Jacobian is geometrically the product of two elliptic curves with complex multiplication by the same (maximal) order. We propose an algorithm that counts how many of them have field of moduli $\mathbb{Q}$. To this end, we develop an equivalence of categories between certain polarized abelian varieties and Hermitian lattices. This generalizes a 2018 article of A. Gélin, E. Howe and C. Ritzenthaler in which the Jacobian is the square of a single elliptic curve.
The explicit determination of perfect powers in (shifted) non-degenerate, integer, binary linear recurrence sequences has only been achieved in a handful of cases. In this talk, we combine bounds for linear forms in logarithms with results from the modularity of elliptic curves defined over totally real fields to explicitly determine all shifted powers by two in the Fibonacci sequence. A major obstacle that is overcome in this work is that the Hilbert newspace which we are interested in has dimension 6144. We will focus on how this space is computationally handled with respect to the underlying Diophantine equation. This is joint work with Mike Bennett (UBC) and Samir Siksek (Warwick).
Rank-metric code-based cryptography relies on the hardness of decoding a random linear code in the rank metric. The Rank Support Learning problem (RSL) is a variant where an attacker has access to N decoding instances whose errors have the same support and wants to solve one of them. This problem is for instance used in the Durandal signature scheme. In this talk, we will present a new algebraic attack on RSL. We build upon Bardet et al., Asiacrypt 2020, where similar techniques are used to solve MinRank and RD. However, our analysis is simpler and overall our attack relies on very elementary assumptions compared to standard Gröbner bases attacks. In particular, our results show that key recovery attacks on Durandal are more efficient than was previously thought. This is joint work with Magali Bardet.
To an algebraic curve $C$ over the complex numbers one can associate a non-negative integer $g$, the genus, as a measure of its complexity. One can also associate to $C$, via complex analysis, a $g\times g$ symmetric matrix $\Omega$ called the period matrix. Because of the natural relation between $C$ and $\Omega$, one can obtain information about either one by studying the other. It therefore makes sense to consider the inverse problem: given a period matrix $\Omega$, can we compute a model for the associated curve $C$?
In this talk, we will review how to construct the period matrix of a curve, we will see some known results around this problem, and discuss an application of its solution.
In this talk, I will study the general problem of when the value of the hypergeometric function $F(a,b;c;z)$ is algebraic, assuming $a$,$b$,$c$, and $z$ rational. The results involve modular forms and functions, complex multiplication, Shimura curves, and computer searches.
Satoh's point-counting algorithm for elliptic curves yields (after improvements by Harvey) quasi-quadratic complexity in the degree for a fixed (small) characteristic $p$. In this talk I will survey several variants of this algorithm and its extensions to abelian varieties. I will then explain how the implementation of this algorithm can be greatly simplified. The Pari/GP implementation of the new algorithm gains a factor of 30 in both running time and memory consumption.
We show a method to sample an element $\alpha$ from a given ideal $I$, such that the quotient ideal $(\alpha)/I$ is a (possibly large) prime times a smooth number (a "near-prime") with reasonable probability. This method consists of "randomizing" the ideal $I$ by multiplying it with small primes (yielding $J$) and consequently sampling the element $\alpha$ from this randomized ideal $J$ intersected with a large box. The probability that the quotient $(\alpha)/J$ is prime (i.e., that the quotient $(\alpha)/I$ is a near-prime) is tightly related to density results on prime ideals (the prime ideal theorem). As an application we show an efficient way to compute power residue symbols for varying degree number fields.
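The sample-and-test loop can be illustrated over $\mathbb{Z}$ rather than a number ring, replacing the ideal $I$ by $n\mathbb{Z}$; this is only an analogy for the structure of the method (all names and parameters below are illustrative, and primality is tested by trial division, toy sizes only):

```python
import random

def is_prime(n):
    """Trial-division primality test (fine for toy sizes)."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def is_near_prime(n, B):
    """True if n is (a prime or 1) times a B-smooth number."""
    for p in range(2, B + 1):
        while n % p == 0:
            n //= p
    return n == 1 or is_prime(n)

def sample_near_prime_quotient(n, box, B, tries=10000):
    """Sample alpha from the 'ideal' n*Z inside [1, box] until the
    quotient alpha/n is a near-prime, mirroring the sample-and-test loop."""
    for _ in range(tries):
        alpha = n * random.randint(1, box // n)
        if is_near_prime(alpha // n, B):
            return alpha
    return None

alpha = sample_near_prime_quotient(60, 10**6, B=10)
```

The success probability per sample is governed, in the integer analogy, by the density of near-primes, just as the number-field version is governed by the prime ideal theorem.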
We will present the signature scheme SQISign (for Short Quaternion and Isogeny Signature), which exploits isogeny graphs of supersingular elliptic curves. The signature and public key sizes combined are an order of magnitude smaller than all other post-quantum signature schemes. Its efficient implementation and security analysis open new research challenges.
When $L/K$ is a Galois extension of number fields with Galois group $G$, some invariants of $L$ can be related to those of its proper subfields. I will present some old and some new such relations, and an application to the computation of class groups of some large number fields. This is joint work with Jean-François Biasse, Claus Fieker and Tommy Hofmann.
This work contributes to the field of lattice-based cryptography, a research domain of public key cryptography that was initiated at the end of the 1990s by two different branches. On the one hand, there have been proposals benefiting from strong theoretical connections to presumed hard worst-case lattice problems, leading to the development of public key cryptography based on the SIS (Short Integer Solution) and LWE (Learning With Errors) problems. On the other hand, very efficient schemes basing their security on average-case structured lattice problems have been introduced, the most popular of which is the NTRU encryption scheme.
Following the latter approach, Hoffstein and Silverman introduced in 2015 a public key encryption scheme called PASS Encrypt. It is very efficient and fulfills additive and multiplicative homomorphic properties. Unfortunately, the main problem with PASS Encrypt to date is that its security is not well understood: no proof of security has been given with respect to the hardness of explicit computational problems, and the scheme is deterministic, hence cannot satisfy the standard notion of IND-CPA security.
In the presented work, we make progress towards understanding the hardness assumptions needed to prove the security of PASS Encrypt. We study the Partial Vandermonde Knapsack problem (PV-Knap) and emphasize its connection to (average-case) ideal lattices. We enlarge the landscape of problems that use the partial Vandermonde matrix by defining a new variant of LWE, called Partial Vandermonde Learning With Errors (PV-LWE). We then show the equivalence of PV-Knap and PV-LWE by exploiting the same duality connection as for standard Knapsack problems and LWE. In order to provide a security proof for PASS Encrypt, we define a variant of PV-Knap that we call the PASS problem. This problem serves (together with the decision version of PV-Knap) as the underlying hardness assumption for (a slightly modified version of) PASS Encrypt. Furthermore, we present the scheme together with the security proof. We conclude the presentation with some interesting open questions regarding problems using the partial Vandermonde transform.
This is joint work with Amin Sakzad and Ron Steinfeld, currently under submission.
The Twisted-PHS algorithm, introduced in 2020 and based on the 2019 PHS algorithm of Pellet-Mary, Hanrot and Stehlé, solves Approx-SVP for ideal lattices in any number field. The authors performed experiments for prime-conductor cyclotomic fields of degree at most 70, reporting the exact approximation factors reached in practice. The main obstacle for these experiments is the computation of a log-S-unit lattice, which requires classical subexponential time.
In this work, we extend these experiments to 210 cyclotomic fields of any conductor m and of degree up to 210. Building upon new results from Bernard and Kucera on the Stickelberger ideal, we construct a maximal set of independent S-units lifted from the maximal real subfield using explicit Stickelberger generators obtained via Jacobi sums. Hence, we obtain full-rank log-S-unit sublattices fulfilling the role of approximating the full Twisted-PHS lattice. Notably, our obtained approximation factors match those from the Twisted-PHS team in small dimensions, when it is feasible to compute the original log-S-unit lattice.
As a side result, we use the knowledge of these explicit Stickelberger elements to remove almost all quantum steps in the CDW algorithm, by Cramer, Ducas and Wesolowski in 2021, under the mild restriction that the plus part of the class number satisfies $h_{m}^{+}\leq O(\sqrt{m})$.
The full paper is available on ePrint:2021/1384. This is joint work with Andrea Lesavourey, Tuong-Huy Nguyen and Adeline Roux-Langlois.
The order type of a sequence of points in the plane is a generalization of the permutation associated with a sequence of real numbers. This combinatorial structure encodes many geometric properties of the point sequence, for instance the face lattice of its convex hull, or the triangulations it supports.
This talk will begin with a quick introduction to these objects. I will then discuss a concentration phenomenon that appears when one reads off the order types of random point sequences, for various natural models. This concentration makes a good random exploration of these structures difficult.
This is joint work with Emo Welzl.
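The object in question is easy to compute on small examples. A minimal Python sketch (helper names are mine): the order type records the orientation of every ordered triple of points, and is invariant under affine maps with positive determinant:

```python
def orient(p, q, r):
    """Sign of the determinant giving the orientation of (p, q, r):
    +1 counter-clockwise, -1 clockwise, 0 collinear."""
    d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (d > 0) - (d < 0)

def order_type(points):
    """Order type of a point sequence: orientations of all ordered triples."""
    n = len(points)
    return tuple(orient(points[i], points[j], points[k])
                 for i in range(n) for j in range(n) for k in range(n)
                 if i != j != k != i)

# Two sequences related by an affine map with positive determinant
# share the same order type.
pts = [(0, 0), (4, 1), (2, 5), (3, 3)]
image = [(x + 2 * y, x + 3 * y) for (x, y) in pts]  # linear map, det = 1 > 0
print(order_type(pts) == order_type(image))
```

Counting the distinct values of `order_type` over random samples is the brute-force version of the exploration whose concentration the talk discusses.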
We present a new data structure to approximate accurately and efficiently a polynomial $f$ of degree $d$ given as a list of coefficients. Its properties allow us to improve the state-of-the-art bounds on the bit complexity for the problems of root isolation and approximate multipoint evaluation. This data structure also leads to a new geometric criterion to detect ill-conditioned polynomials, implying notably that the standard condition number of the zeros of a polynomial is at least exponential in the number of roots of modulus less than $\frac{1}{2}$ or greater than $2$.
Let $K$ be a number field and $A/K$ an abelian surface. By the Mordell-Weil theorem, the group of $K$-rational points on $A$ is finitely generated and as for elliptic curves, its rank is predicted by the Birch and Swinnerton-Dyer conjecture. A basic consequence of this conjecture is the parity conjecture: the sign of the functional equation of the $L$-series determines the parity of the rank of $A/K$.
Assuming finiteness of the Shafarevich-Tate group, we prove the parity conjecture for principally polarized abelian surfaces under suitable local constraints. Using a similar approach we show that for two elliptic curves $E_1$ and $E_2$ over $K$ with isomorphic $2$-torsion, the parity conjecture is true for $E_1$ if and only if it is true for $E_2$. In both cases, we prove analogous unconditional results for Selmer groups.
In this talk we will first review the classical criteria to determine the (stable) reduction type of elliptic curves (Tate) and of genus 2 curves (Liu) in terms of the valuations of certain combinations of their invariants. We will also revisit the theory of cluster pictures for determining the reduction type of hyperelliptic curves (Dokchitser et al.). Via Mumford theta constants and the formulas of Takase and Tomae, we will be able to read off the cluster picture information from the valuations of some (à la Tsuyumine) invariants in the genus 3 case. We will also discuss the possible generalization of this strategy to arbitrary genus and some related open questions.
Reed-Solomon codes are widely used to represent data as vectors, so that the data can be recovered even if some coordinates of the vectors are corrupted. These codes enjoy many properties. Their parameters are optimal. They make it possible to reconstruct coordinates that have been erased. They are compatible with addition and multiplication of data. Nevertheless, they suffer from certain limitations. In particular, the storage size of the vector coordinates grows logarithmically with the number of coordinates. So-called geometric codes generalize Reed-Solomon codes while enjoying the same properties, and are free of these limitations. Consequently, the use of geometric codes brings complexity gains and proves useful in several applications such as distributed computation on secrets and zero-knowledge proofs. Geometric codes are constructed by evaluating families of functions, called Riemann-Roch spaces, at the rational points of a curve. It follows that computing these spaces is crucial for implementing geometric codes. In this talk, I will present recent joint work with S. Abelard, A. Couvreur and G. Lecerf on the effective computation of bases of Riemann-Roch spaces of curves. After reviewing the state of the art on the subject, I will discuss the ideas underlying our algorithm, in particular Brill-Noether theory and the use of Puiseux expansions. The curves used in the construction of geometric codes are mostly limited to those for which Riemann-Roch bases are already known. This new work, and those to follow, will allow the construction of geometric codes from more general curves.
Oriented Supersingular Isogeny Diffie-Hellman (OSIDH) is a post-quantum key exchange proposed by Leonardo Colò and David Kohel in 2019. The construction relies on the action of the class group of an imaginary quadratic order on a space of supersingular elliptic curves, and can therefore be seen as a generalization of the well-known isogeny-based key exchange CSIDH. However, OSIDH is very different from CSIDH from an algorithmic point of view, because OSIDH uses class groups that are more structured than those of CSIDH. As Colò and Kohel themselves acknowledged, this makes OSIDH more vulnerable to attacks. To work around this weakness, they proposed an ingenious way to carry out the key exchange by exchanging information about the class group action in the neighbourhood of the public curves, and conjectured that this additional information would not impact security.
In this talk, we reassess the security of OSIDH by proposing a new attack, inspired by previous work of Onuki. Our attack is exponential but breaks the parameters chosen by Colò and Kohel, unlike Onuki's attack. We will also see possible countermeasures to this attack, and analyse their impact on OSIDH in terms of efficiency and functionality.
Theta functions connect the algebraic and analytic points of view in the study of abelian varieties: they are Siegel modular forms that provide coordinates on these varieties and their moduli spaces. Making this link effective requires an efficient algorithm to evaluate these theta functions at a point. In his thesis (2006), Dupont described a heuristic algorithm based on the arithmetic-geometric mean (AGM) and a Newton scheme to evaluate certain theta functions in genus 1 and 2 in quasi-linear time in the precision. The goal of this talk is to show that one can in fact obtain a certified algorithm with uniform complexity. I will also discuss the remaining obstacles to generalizing this result to higher dimension.
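The quadratic convergence of the AGM is what drives the quasi-linear dependence on the precision, and is easy to observe. A minimal floating-point sketch (real positive inputs only; the actual algorithm works with complex values and certified error bounds):

```python
def agm(a, b, tol=1e-14):
    """Arithmetic-geometric mean of two positive reals.
    Each step maps (a, b) to ((a+b)/2, sqrt(a*b)); the number of
    agreeing digits roughly doubles per iteration (quadratic convergence)."""
    steps = 0
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2, (a * b) ** 0.5
        steps += 1
    return a, steps

m, steps = agm(1.0, 2.0)
print(m, steps)  # full double precision is reached in a handful of iterations
```

At precision $P$ bits, the same doubling phenomenon means only $O(\log P)$ AGM steps are needed, each costing a few multiplications at precision $P$.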
F. Oort showed that the moduli space of principally polarized supersingular abelian surfaces is a union of rational curves. This is proven by showing that every principally polarized supersingular abelian surface is the Jacobian of a fibre of one of the families of genus 2 curves $\pi: \mathcal{C}\rightarrow \mathbb{P}^1$ constructed by L. Moret-Bailly. We present an algorithm that makes this construction effective: given a point $x\in \mathbb{P}^1$ we compute a hyperelliptic model of the fibre $\pi^{-1}(x)$. The algorithm uses Mumford’s theory of theta groups to compute quotients by the group scheme $\alpha_p$.
Satoh’s algorithm for counting the number of points of an elliptic curve $E/\mathbb{F}_q$ with $q=p^n$ is the fastest known algorithm when $p$ is fixed: it computes the invertible eigenvalue $\lambda$ of the Frobenius to $p$-adic precision $m$ in time $\tilde{O}(p^2 n m)$. Since by Hasse’s bound, recovering $\chi_{\pi}$ requires working at precision $m=O(n)$, the point counting complexity is $\tilde{O}(p^2 n^2)$, quasi-quadratic in the degree $n$.
Unfortunately, the term $p^2$ in the complexity makes Satoh’s algorithm suitable only for small $p$. For medium-sized $p$, one can use Kedlaya’s algorithm, which costs $\tilde{O}(p n^2 m)$, or a variant by Harvey which costs $\tilde{O}(p^{1/2} n^{5/2} m + n^4 m)$; these have a better dependency on $p$ but a worse one on $n$. For large $p$, the SEA algorithm costs $\tilde{O}(\log^4 q)$.
In this talk, we improve the dependency on $p$ of Satoh’s algorithm while retaining the dependency on $n$, to bridge the gap towards medium characteristic. We develop a new algorithm with a complexity of $\tilde{O}(p n m)$. In the particular case where we are furthermore provided with a rational point of $p$-torsion, we even improve this complexity to $\tilde{O}(p^{1/2} n m)$.
This is a joint work with Abdoulaye Maiga.
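To fix ideas on the quantities involved (the trace $a$ of Frobenius, Hasse's bound, and the characteristic polynomial $\chi_{\pi}$), here is a naive exponential-time point count in Python over a prime field. This is the opposite of Satoh's $p$-adic method, but convenient for checking the objects on toy examples (curve coefficients chosen arbitrarily):

```python
def count_points(A, B, p):
    """Number of points of y^2 = x^3 + A*x + B over F_p (p an odd prime),
    counting the point at infinity, via Euler's criterion."""
    def chi(a):          # Legendre symbol (a/p)
        if a % p == 0:
            return 0
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1
    return 1 + sum(1 + chi(x**3 + A * x + B) for x in range(p))

p = 101
N = count_points(2, 3, p)
a = p + 1 - N                      # trace of Frobenius
assert a * a <= 4 * p              # Hasse: |a| <= 2*sqrt(p)
# The Frobenius pi satisfies chi_pi(T) = T^2 - a*T + p; Satoh-type algorithms
# recover a by computing the invertible eigenvalue lambda p-adically.
```

The $O(n)$ $p$-adic digits of $\lambda$ that Satoh's method computes are exactly what is needed to pin down the integer $a$ inside the Hasse interval.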
Solving polynomials is a fundamental computational problem in mathematics. In the real setting, we can use Descartes’ rule of signs to efficiently isolate the real roots of a square-free real polynomial. In this talk, we show how to translate this method into the p-adic world. We show how the p-adic analog of Descartes’ rule of signs, Strassman’s theorem, leads to an algorithm to isolate the p-adic roots of a square-free p-adic polynomial, and provide some complexity estimates, adapting the condition-based complexity framework from real/complex numerical algebraic geometry to the p-adic case.
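To illustrate how Strassman's theorem plays the role of Descartes' rule, the bound it gives on the number of roots in $\mathbb{Z}_p$ can be read off from valuations alone. A Python sketch on integer polynomials (this is only the counting bound, not the isolation algorithm of the talk):

```python
def vp(n, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def strassman_bound(coeffs, p):
    """Strassman bound for f = sum a_i x^i with integer coefficients:
    the number of roots of f in Z_p is at most the largest index N
    with v_p(a_N) minimal (i.e. |a_N| maximal p-adically)."""
    vals = [vp(a, p) if a != 0 else float('inf') for a in coeffs]
    m = min(vals)
    return max(i for i, v in enumerate(vals) if v == m)

# f(x) = x^2 - 1 = (x-1)(x+1): two roots in Z_5, and the bound is 2.
print(strassman_bound([-1, 0, 1], 5))   # -> 2
# f(x) = 5x^2 + x + 5: minimal valuation at index 1, so at most 1 root in Z_5.
print(strassman_bound([5, 1, 5], 5))    # -> 1
```

Combined with the changes of variable $x \mapsto p x + c$, such bounds drive a subdivision scheme analogous to Descartes-based real root isolation.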
In this talk we will describe an explicit approach for computing inertial automorphic types for ${\rm GL}_2$. We will then give some applications of this algorithm to Diophantine problems and other questions of an arithmetic nature.
In a recent paper Don Zagier mentions a mysterious integer sequence $(a_n)_{n \geq 0}$ which arises from a solution of a topological ODE discovered by Marco Bertola, Boris Dubrovin and Di Yang. In my talk I show how to conjecture, prove and even quantify that $(a_n)_{n \geq 0}$ actually admits an algebraic generating function which is therefore a very particular period. The methods are based on experimental mathematics and algorithmic ideas in differential Galois theory, which I will show in the interactive part of the talk. The presentation is based on joint work with A. Bostan and J.-A. Weil.
This is joint work with P. Cho, Y. Lee and A. Södergren. Since the work of Davenport-Heilbronn, many papers have given increasingly precise estimates for the number of cubic fields of discriminant at most X. Let us mention, for example, the works of Belabas, Belabas-Bhargava-Pomerance, Bhargava-Shankar-Tsimerman, Taniguchi-Thorne and Bhargava-Taniguchi-Thorne. In this talk I will present a negative result, showing that the Riemann hypothesis implies a limitation on the smallest possible size of the error term in these estimates. We approach the question through the theory of low-lying zeros of $L$-functions, in particular the Katz-Sarnak philosophy and the subsequent papers on the family of Dedekind zeta functions of cubic fields. I will also present numerical results, obtained with pari/gp and Belabas's program "cubic", which indicate that our result could be optimal.
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds.
This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance.
In this work, we provide generic realisations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the Lattice Isomorphism Problem (LIP). More specifically, we provide:
- a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014).
- a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism. This implies an identification scheme based on search-LIP.
- a key encapsulation mechanism (KEM) scheme and a hash-then-sign signature scheme, both based on distinguish-LIP.
The purpose of this approach is for remarkable lattices to improve the security and performance of lattice-based cryptography. For example, decoding within a poly-logarithmic factor from Minkowski’s bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to a poly-logarithmic approximation factor, provided that the dual lattice is also close to Minkowski’s bound. Recent works have indeed reached such decoders for certain lattices (Chor-Rivest, Barnes-Sloane), but these do not perfectly fit our need as their duals have poor minimal distance.
Computing the class group and the unit group of a number field is a famous problem of algorithmic number theory. Recently, it has also become an important problem in cryptography, since it is used in multiple algorithms related to algebraic lattices.
Subexponential time algorithms are known to solve this problem in any number field, but they heavily rely on heuristics. The only known non-heuristic algorithm (though still under ERH), due to Hafner and McCurley, is restricted to imaginary quadratic number fields.
In this talk, we will see a rigorous subexponential time algorithm computing units and class group (and more generally S-units) in any number field, assuming the extended Riemann hypothesis.
This is a joint work with Koen de Boer and Benjamin Wesolowski.
We will give an overview of some techniques and results for computing the cohomology of arithmetic groups of rank $\ge 4$ over rings of algebraic integers, together with their arithmetic and K-theoretic applications. We will then focus on methods using the Voronoi model (Euclidean or Hermitian), as well as several algorithmic improvements. We will detail some results on Voronoi complexes and their cells (for $\mathrm{GL}_N$ with $N<12$), as well as work in progress with B. Allombert and R. Coulangeon on perfect forms of rank $N$ over $\mathcal{O}_K$ and the cohomology of $\mathrm{GL}_N(\mathcal{O}_K)$ for certain rings of integers with $N=4,5,6$. We will also mention several open problems related to these models.
One of the cornerstones of isogeny-based cryptography is the simply transitive action (known as CRS) of the class group of an order in an imaginary quadratic field on a certain set of isomorphism classes of ordinary elliptic curves.
The non-interactive key exchange based on this action (a hard homogeneous space) is relatively slow (De Feo, Kieffer, Smith, 2019), and the group structure (Beullens, Kleinjung, Vercauteren, 2019) is hard to compute. To remedy this, we describe a simply transitive action of the Jacobian of an imaginary hyperelliptic curve on a certain set of isomorphism classes of Drinfeld modules.
After motivating the use of Drinfeld modules in place of elliptic curves, we will describe an efficient algorithm for computing the action, as well as Benjamin Wesolowski's recent attack on the key exchange given by the action.
FastECPP is currently the fastest approach to prove the primality of general numbers, and has the additional benefit of creating certificates that can be checked independently and with a lower complexity. It crucially relies on the explicit construction of elliptic curves with complex multiplication.
I will take you on a leisurely stroll through the different phases of the ECPP and fastECPP algorithms, with explanations of their complexity. We will then see the algorithmic choices I have made when integrating a parallelised implementation of fastECPP into my CM software, which has recently been used to prove the primality of a number of record size (50000 digits).
Let $K = \mathbb{Q}(\alpha_1,\dots,\alpha_k)$ be an algebraic number field. We are interested in computing polynomial GCDs in $K[x]$ and $K[x_1,\dots,x_n]$. Of course we also want to multiply, divide and factor polynomials over $K$. In $K[x]$ we have the Euclidean algorithm, but it “blows up”: the rational numbers in the remainders grow in size. It is faster to compute the GCD modulo one or more primes and use the Chinese remainder theorem and rational number reconstruction. This leads to computing a GCD in $R[x]$ where $R = K \bmod p$ is usually not a field – it is a finite ring.
How do Computer Algebra Systems represent elements of $K$? How do Computer Algebra Systems compute GCDs in $K[x]$? What is the best way to do arithmetic in $R$? How can we compute a polynomial GCD in $K[x_1,\dots,x_n]$? In the talk we will try to answer these questions and we will present some timing benchmarks comparing our own C library for computing GCDs in $R[x]$ with Maple and Magma.
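The modular strategy can be sketched compactly on a toy example over $\mathbb{Q}$ itself: compute the monic GCD modulo one prime, then recover rational coefficients by rational number reconstruction. This Python sketch uses a single prime and no CRT (a real implementation uses several primes, combines them by CRT, and verifies the lifted result by trial division):

```python
def poly_gcd_mod(f, g, p):
    """Monic gcd of f, g in F_p[x]; polynomials as low-to-high coefficient lists."""
    def trim(a):
        while a and a[-1] % p == 0:
            a.pop()
        return a
    def rem(a, b):                       # remainder of a divided by b mod p
        a = a[:]
        inv = pow(b[-1], p - 2, p)
        while len(a) >= len(b):
            c = a[-1] * inv % p
            d = len(a) - len(b)
            for i, bi in enumerate(b):
                a[d + i] = (a[d + i] - c * bi) % p
            trim(a)
        return a
    f, g = trim([c % p for c in f]), trim([c % p for c in g])
    while g:
        f, g = g, rem(f, g)
    inv = pow(f[-1], p - 2, p)
    return [c * inv % p for c in f]      # make the gcd monic

def rational_reconstruction(u, m):
    """Recover n/d with |n|, d <= sqrt(m/2) from u = n/d mod m (half Euclid)."""
    r0, r1, t0, t1 = m, u % m, 0, 1
    while 2 * r1 * r1 >= m:
        q = r0 // r1
        r0, r1, t0, t1 = r1, r0 - q * r1, t1, t0 - q * t1
    if t1 < 0:
        r1, t1 = -r1, -t1
    return r1, t1                        # numerator, denominator

p = 10**9 + 7
# gcd((x-1)(x+2), (x-1)(x+3)) = x - 1, recovered mod p:
g = poly_gcd_mod([-2, 1, 1], [-3, 2, 1], p)
```

No coefficient growth occurs mod $p$, which is the whole point; the blow-up of the rational Euclidean algorithm is paid only once, at reconstruction time.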
SIDH/SIKE was a post-quantum key exchange mechanism based on isogenies between supersingular elliptic curves, which was selected on July 5, 2022 by NIST to advance to the fourth round of the PQC competition. It was broken soon after, during the summer, in a series of three papers by Castryck-Decru, Maino-Martindale and myself.
The attacks all use the extra information on the torsion points used for the key exchange. We first review Petit’s dimension 1 torsion point attack from 2017, which could only apply to unbalanced parameters. Then we explain how the dimension 2 attacks of Maino-Martindale and especially Castryck-Decru could break some parameters in heuristic (but in practice very effective) polynomial time, including the NIST submission where the starting curve $E: y^2=x^3+x$ has the explicit endomorphism $i$.
Finally we explain how by going to dimension 8, we could break in proven quasi-linear time all parameters for SIKE.
At the beginning of the talk, we will explain how the SIDH protocol worked. We will see that the attack ultimately relies on a very simple $2\times 2$ matrix computation! There will also be (hopefully) fun memes during the talk!
Over a decade ago, Arnold Schönhage proposed a method to compute elementary functions (exp, log, sin, arctan, etc.) efficiently in “medium precision” (up to about 1000 digits) by reducing the argument using linear combinations of pairs of logarithms of primes or Gaussian primes. We generalize this approach to an arbitrary number of primes (which in practice may be 10-20 or more), using an efficient algorithm to solve the associated Diophantine approximation problem. Although theoretically slower than the arithmetic-geometric mean (AGM) by a logarithmic factor, this is now the fastest algorithm in practice to compute elementary functions from about 1000 digits up to millions of digits, giving roughly a factor-two speedup over previous methods. We also discuss the use of optimized Machin-like formulas for simultaneous computation of several logarithms or arctangents of rational numbers, which is required for precomputations.
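The core idea of argument reduction via logarithms of small primes can be sketched in a few lines of Python. This is only a greedy toy (the talk's method solves the underlying Diophantine approximation problem properly, with many more primes and arbitrary precision); here we find exponents $e_i$ so that $r = x/\prod p_i^{e_i}$ is close to 1, then sum a rapidly converging series for $\log r$:

```python
from math import log

def reduce_argument(x, primes=(2, 3, 5, 7)):
    """Greedily choose exponents e_i so that r = x / prod(p_i**e_i) is near 1.
    Then log(x) = sum(e_i * log p_i) + log(r), with log(r) cheap to compute
    since |r - 1| is small. Greedy sketch only, not the talk's algorithm."""
    exps = [0] * len(primes)
    r = x
    improved = True
    while improved:
        improved = False
        for i, p in enumerate(primes):
            for s in (1, -1):
                cand = r / p**s
                if abs(cand - 1) < abs(r - 1):
                    r, exps[i] = cand, exps[i] + s
                    improved = True
    return exps, r

def log_series(r, terms=30):
    """log(r) via the atanh series in u = (r-1)/(r+1); fast for r near 1."""
    u = (r - 1) / (r + 1)
    return 2 * sum(u**(2 * k + 1) / (2 * k + 1) for k in range(terms))

exps, r = reduce_argument(123.456)
approx = sum(e * log(p) for e, p in zip(exps, (2, 3, 5, 7))) + log_series(r)
print(approx, log(123.456))  # the two values agree closely
```

With more primes available, $|r-1|$ can be made far smaller, so far fewer series terms are needed; finding the best exponent vector is exactly the Diophantine approximation problem the abstract mentions.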
The method found by W. Castryck and T. Decru to break SIDH requires computing $(2^n,2^n)$-isogenies from a product of elliptic curves to another abelian surface (which is also a product), realized as degree 2 correspondences between curves.
Transposing the attack to the other side of the SIDH exchange involves degree (3,3) isogenies that can be evaluated using either theta functions, or divisors on genus 2 curves. Methods for the curve approach exist for the Jacobian case, but the case of a product of elliptic curves (Bröker, Howe, Lauter, Stevenhagen 2014) can be difficult to implement for cryptographically relevant field sizes due to various limitations in CAS such as SageMath/Singular.
I will explain how traditional algebraic geometry can be called to the rescue to give a simple construction of the curve correspondence associated to the quotient of $E_1 \times E_2$ by an isotropic $(3,3)$ kernel. This leads to a rather fast computation method relying only on elementary field operations and 2 square roots. The journey will bring back some memories of 19th century projective geometry. Theta function experts might recognize familiar objects in the geometric construction.
In our article, we generalize the works of Pan et al. (Eurocrypt’21) and Porter et al. (ArXiv’21) and provide a simple condition under which an ideal lattice defines an easy instance of the shortest vector problem. Namely, we show that the more automorphisms stabilize the ideal, the easier it is to find a short vector in it. This observation was already made for prime ideals in Galois fields, and we generalize it to any ideal (whose prime factors are not ramified) of any number field. We then provide a cryptographic application of this result by showing that particular instances of the partial Vandermonde knapsack problem, also known as partial Fourier recovery problem, can be solved classically in polynomial time.
We explain how the "embedding lemma" used in the recent attacks against SIDH can be used constructively. Namely, we show that every $N$-isogeny between abelian varieties over a finite field admits an efficient representation allowing for its evaluation in time polylogarithmic in $N$. Furthermore, using Vélu's formulas for elliptic curves, or isogenies in the theta model for dimension $g>1$, this representation can be computed in time quasi-linear in $N^g$.
Fix an integer $n \geq 2$ and, for $X \geq 0$, let $C_n(X)$ denote the set of isomorphism classes of number fields of degree $n$ and absolute discriminant at most $X$. The Hunter-Pohst method enumerates $C_n(X)$ in time $O(X^{\frac{n+2}{4}+\epsilon})$. For $n \geq 3$, this complexity is not expected to be optimal: indeed, a classical conjecture, proved for $n \leq 5$, predicts the existence of a constant $c_n > 0$ such that the cardinality of $C_n(X)$ is asymptotic to $c_n X$. Using a parametrization of cubic fields due to Davenport and Heilbronn, Belabas devised an algorithm enumerating $C_3(X)$ in optimal time $O(X^{1+\epsilon})$. I will show how a parametrization of quartic fields due to Bhargava similarly allows one to enumerate $C_4(X)$ in time $O(X^{\frac{5}{4}+\epsilon})$. I will then present numerical results, as well as prospects for improvement and for generalization in higher degree.
I will describe with numerous examples a new Pari/GP package for infinite continued fractions which can in particular compute numerically the limit, the exact asymptotic speed of convergence (almost never given in the literature), accelerate continued fractions, and especially apply the powerful Apéry acceleration technique to almost all continued fractions, leading to hundreds of new ones.
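This is not the Pari/GP package itself, but the standard convergent recurrence that any such numerical evaluation builds on can be sketched in a few lines of Python, using the continued fraction of e as an example:

```python
from fractions import Fraction
import math

def cf_convergents(coeffs):
    """Yield the convergents p_k/q_k of the simple continued fraction
    [a0; a1, a2, ...] via the classical recurrence
    p_k = a_k * p_{k-1} + p_{k-2} (and likewise for q_k)."""
    p0, q0 = 1, 0
    p1, q1 = coeffs[0], 1
    yield Fraction(p1, q1)
    for a in coeffs[1:]:
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        yield Fraction(p1, q1)

def e_coeffs(n):
    """First n partial quotients of e = [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]."""
    out, k = [2], 2
    while len(out) < n:
        out += [1, k, 1]
        k += 2
    return out[:n]

approx = list(cf_convergents(e_coeffs(20)))[-1]
err = abs(float(approx) - math.e)
```

For a generalized continued fraction with partial numerators b_k, the same recurrence applies with p_k = a_k p_{k-1} + b_k p_{k-2}; the error |p_n/q_n - limit| is governed by 1/(q_n q_{n+1}), which is the kind of convergence-speed information the package computes exactly.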
The security of many existing cryptographic systems relies on the difficulty of solving the discrete logarithm problem (DLP) in a group. For a generic group, we can solve this problem with algorithms such as baby-step giant-step, Pollard rho, or Pohlig-Hellman. For a group with known structure, we use the index calculus algorithm to solve the discrete logarithm problem. In particular, the DLP on the Jacobian of a hyperelliptic curve defined over a finite field $\mathbb{F}_{q^n}$ with $n>1$ is subject to index calculus attacks. After a convenient factor basis has been chosen, the index calculus algorithm has three steps: the decomposition step, in which a random point is decomposed over the factor basis; the linear algebra step, in which a matrix equation is solved; and the descent phase, in which the discrete logarithm is deduced. The complexity of the algorithm crucially depends on the size of the factor basis, since this determines both the probability for a point to decompose over the basis and the cost of the linear algebra step. Faugère et al. (Eurocrypt 2014) exploit the $2$-torsion of the curve to reduce the size of the factor basis and thus improve the complexity of the index calculus algorithm. In a similar manner, we exploit endomorphisms of the Jacobian to reduce the size of the factor basis for certain families of ordinary elliptic curves and genus $2$ hyperelliptic Jacobians defined over finite fields. This approach adds an extra cost when performing operations on the factor basis, but our benchmarks show that reducing the size of the factor basis yields a gain in the total complexity of the index calculus algorithm compared to generic attacks.
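For context, here is a minimal sketch of one of the generic-group algorithms mentioned above, baby-step giant-step, in the multiplicative group modulo a prime (the group $(\mathbb{Z}/101\mathbb{Z})^*$ and the generator 2 are illustrative choices only):

```python
import math

def bsgs(g, h, p):
    """Baby-step giant-step for g^x = h in (Z/pZ)*: the classic
    O(sqrt(n))-time, O(sqrt(n))-memory generic-group DLP algorithm."""
    n = p - 1                    # group order, assuming g generates (Z/pZ)*
    m = math.isqrt(n) + 1
    table = {}                   # baby steps: g^j -> j for 0 <= j < m
    e = 1
    for j in range(m):
        table.setdefault(e, j)
        e = e * g % p
    factor = pow(g, -m, p)       # g^(-m); negative exponents need Python >= 3.8
    gamma = h % p
    for i in range(m):           # giant steps: test whether h * g^(-im) is a baby step
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None                  # no solution (h not in <g>)

x = bsgs(2, pow(2, 47, 101), 101)
```

The square-root barrier of such generic algorithms is exactly what index calculus beats when the group has exploitable structure.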
Outsourcing IT services has become very common worldwide, for reasons ranging from cost reduction to improved services. Whatever the actual reason, the concrete consequence for the company that delegates such services is that a third party ends up with its data in the clear, because of the well-known limitations of standard encryption.
Ideally, this third party should only learn the minimal information necessary for performing the requested processing, which has motivated the design of countless encryption schemes compatible with specific processing. Such schemes belong to the realm of functional encryption, where the third party recovers a function f(x) from an encryption of x without learning anything else about x, with minimal interaction. Of course, the function f, and hence the encryption scheme, strongly depends on the considered application, which explains the profusion of papers related to this topic. We will focus on the possibility of allowing a third party to search for the presence of chosen substrings of different lengths (and more!) at any position in the encryption of a stream of data. After an introduction to this problem and to the associated security notion, we will take a look at the proof of security of one specific construction.
Filtered $(\varphi,N)$-modules over a $p$-adic field $K$ are semi-linear objects which are easy to define and can be implemented on a computer. The modules $D_{st}(V)$ defined by $p$-adic Hodge theory, where $V$ is a $p$-adic representation of the absolute Galois group of $K$, provide examples of filtered $(\varphi,N)$-modules. When $V$ is nice enough (semi-stable), the data of $D_{st}(V)$ is sufficient to recover $V$. A necessary and sufficient condition for a filtered $(\varphi,N)$-module $D$ to be written as $D_{st}(V)$ for some semi-stable representation $V$ is the condition of "admissibility", which imposes conditions on the way the different structures of the $(\varphi,N)$-module interact with each other.
In a joint work with Xavier Caruso, we try to provide an algorithm which takes a filtered $(\varphi,N)$-module as input and outputs whether it is admissible or not. I will explain how we can implement filtered $(\varphi,N)$-modules on a computer and why this question is well posed. I will then present an algorithm which answers the question when the $(\varphi,N)$-module is nice enough, and explain the difficulties we face both in this nice case and in the general case.
For an abelian variety A over a number field K, we know by the Mordell-Weil theorem that for every finite extension L/K, the number c(L) of torsion points of A(L) is finite.
In fact, a result of Masser predicts that c(L) is polynomial in [L:K] (for fixed A and K) with exponent g = dim A, and a 2012 conjecture of Hindry and Ratazzi gives the optimal exponent (smaller than g in general) in terms of a certain structure of the abelian variety (related to its so-called Mumford-Tate group).
In this talk, I will discuss joint work with Lombardo and Zywina in which we prove an unconditional form of this conjecture (and the conjecture itself assuming the Mumford-Tate conjecture), emphasizing the intermediate results that may be of independent interest for the understanding of Galois representations attached to abelian varieties.
In several applications one is interested in a fast computation of the codomain curve of a long chain of cyclic N-isogenies emanating from an elliptic curve E over a finite field Fq, where N = 2, 3, … is some small fixed integer coprime to q. The standard approach proceeds by finding a generator of the kernel of the first N-isogeny, computing its codomain via Vélu’s formulas, then finding a generator of the kernel of the second N-isogeny, and so on. Finding these kernel generators is often the main bottleneck.
In this talk I will explain a new approach to this problem, which was studied in joint work with Thomas Decru, Marc Houben and Frederik Vercauteren. We argue that Vélu’s formulas can be augmented with explicit formulas for the coordinates of a generator of the kernel of an N-isogeny cyclically extending the previous isogeny. These formulas involve the extraction of an N-th root, therefore we call them “radical isogeny formulas”. By varying which N-th root was chosen (i.e., by scaling the radical with different N-th roots of unity) one obtains the kernels of all possible such extensions. Asymptotically, in our main cases of interest this gives a speed-up by a factor 6 or 7 over previous methods.
I will explain the existence of radical isogeny formulas, discuss methods to find them (the formulas become increasingly complicated for larger N), and pose some open questions.
The computation of unit and class groups in number fields of arbitrary degree can be done in quantum polynomial time, in a fashion similar to Shor's factoring algorithm. Contrary to the fixed-degree case, which was solved in 2001 by Hallgren and in a follow-up paper by Schmidt and Vollmer (2005), the arbitrary-degree case requires error estimates and was solved by the conjunction of two papers, Eisenträger et al. (2014) and de Boer et al. (2020).
In the particular case of cyclotomic fields we propose a version of the algorithm which makes use of cyclotomic units. Indeed, the Shor-like procedure of Eisenträger et al.'s algorithm produces random approximations of vectors in the dual of the lattice of units. In order to guarantee the correctness of the algorithm, they have to do the computations in high precision and hence require a large number of qubits. Thanks to the lattice of cyclotomic units, one can do the computations in lower precision and reduce the number of qubits.
We will discuss an adaptation of the algorithmic reduction theory of lattices to binary codes. This includes the celebrated LLL algorithm (Lenstra, Lenstra, Lovasz, 1982), as well as adaptations of associated algorithms such as the Nearest Plane Algorithm of Babai (1986). Interestingly, the adaptation of LLL to binary codes can be interpreted as an algorithmic version of the bound of Griesmer (1960) on the minimal distance of a code.
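As background, here is the two-dimensional ancestor of LLL, Lagrange-Gauss reduction, which already exhibits the size-reduce/swap pattern that both the lattice algorithm and its binary-code adaptation generalize (the input basis below is an arbitrary example):

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a rank-2 integer lattice basis:
    the 2-dimensional case generalized by LLL (1982). Returns a basis
    whose first vector is a shortest nonzero lattice vector."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # Size-reduce v against u: subtract the nearest-integer multiple
        # given by the Gram-Schmidt coefficient <u,v>/<u,u>.
        m = round(dot(u, v) / dot(u, u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v          # ordered: no further progress possible
        u, v = v, u              # swap and continue, as in the LLL loop

b1, b2 = gauss_reduce((90, 123), (56, 76))
```

In the binary-code setting of the talk, the Euclidean inner product and nearest-integer rounding are replaced by their analogues for the Hamming metric, but the reduce-and-swap skeleton persists.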
Computing roots of elements is an important step when solving various tasks in computational number theory. It arises for example during the final step of the General Number Field Sieve~(Lenstra et al. 1993). This problem also intervenes during saturation processes while computing the class group or $S$-units of a number field (Biasse and Fieker).
It is known from the seminal paper introducing the LLL algorithm that one can recover elements of a given number field $K$ from approximations of one of their complex embeddings. This can be used to compute roots of polynomials. In the first part of this presentation, I will describe an extension of this approach that takes advantage of a potential subfield $k$: it replaces the decoding of one element of $K$ by the decoding of $[K:k]$ elements of $k$, at the cost of a search in a set of cardinality $d^{[K:k]}$, where $d$ is the degree of the targeted polynomial equation. We will also describe heuristic observations that are useful to speed up computations.
In the second part of the presentation, we will describe methods to compute $e$-th roots specifically. When $K$ and $e$ are such that there are infinitely many prime integers $p$ such that $\forall \mathfrak{p} \mid p,\ p^{f(\mathfrak{p} \mid p)} \not\equiv 1 \bmod e$, we reconstruct $x$ from $x \bmod p_1, \dots, x \bmod p_r$ using a generalisation of Thomé's work on square roots in the context of the NFS (Thomé). When this condition on $K$ and $e$ is not satisfied, one can adapt Couveignes' approach for square roots (Couveignes) to relative extensions of number fields $K/k$, provided $[K:k]$ is coprime to $e$ and there are infinitely many prime integers $p$ such that each prime ideal $\mathfrak{p}$ of $\mathcal{O}_k$ above $p$ is inert in $K$.
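As a toy Python sketch of the reconstruction pattern in the favourable case (here for $e = 3$ over the rationals, where primes $p \not\equiv 1 \bmod 3$ make the cube map a bijection on $\mathbb{F}_p$; the actual method works with prime ideals of a number field):

```python
def crt(residues, moduli):
    """Chinese Remainder Theorem: the canonical lift modulo prod(moduli)."""
    M = 1
    for p in moduli:
        M *= p
    x = 0
    for r, p in zip(residues, moduli):
        Mp = M // p
        x += r * Mp * pow(Mp, -1, p)   # modular inverse needs Python >= 3.8
    return x % M

def local_cube_root(y, p):
    """For p != 1 (mod 3) the cube map is a bijection on F_p, so the
    unique cube root of y is y^(3^-1 mod (p-1))."""
    return pow(y, pow(3, -1, p - 1), p)

# Reconstruct a cube root: take enough primes p != 1 (mod 3) for their
# product to exceed the root, extract the root locally, glue by CRT.
x_true = 1234
y = x_true ** 3
primes = [5, 11, 17, 23, 29]           # all != 1 mod 3; product > 1234
roots = [local_cube_root(y % p, p) for p in primes]
recovered = crt(roots, primes)
```

The subtlety addressed in the talk is precisely what to do when no such "good" primes exist, where each local root is only defined up to an $e$-th root of unity and a naive gluing fails.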
The so-called $\omega$-encoding, introduced by Goudarzi, Joux and Rivain (Asiacrypt 2018), generalizes the commonly used arithmetic encoding. By using the additional structure of this encoding, they proposed a masked multiplication gadget (GJR) with quasilinear complexity in both randomness and operations. A second contribution in this line of research, by Goudarzi, Prest, Rivain and Vergnaud, appeared in TCHES 2021. The authors revisited the aforementioned multiplication gadget (GPRV) and introduced the IOS security notion for refresh gadgets, which allows secure composition of probing-secure gadgets.
In this paper, we propose a follow-up to GPRV. Our contribution stems from a single lemma linking algebra and probing security for a wide class of circuits, further exploiting the algebraic structure of the $\omega$-encoding. On the theoretical side, we weaken the IOS notion into the KIOS notion, and the usual $t$-probing security into RTIK security. The composition theorem that we obtain by combining KIOS and RTIK gadgets still achieves region-probing security for compositions of circuits.
To substantiate our weaker definitions, we also provide examples of competitively efficient gadgets satisfying our weaker security notions. Explicitly, we give 1) a refresh gadget that uses $d-1$ random field elements to refresh a length-$d$ encoding and is KIOS but not IOS, and 2) multiplication gadgets asymptotically subquadratic in both randomness and operations. While our algorithms outperform the ISW masked compiler asymptotically, their security proofs require a bounded number of shares for a fixed base field.
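For readers unfamiliar with masking, here is a minimal sketch of the plain arithmetic encoding and of a naive refresh consuming $d-1$ random field elements; it illustrates only the randomness budget, and does not capture the $\omega$-encoding structure or the (K)IOS properties of the gadgets in the paper:

```python
import secrets

P = 2 ** 13 - 1   # a small prime field, chosen here purely for illustration

def share(x, d):
    """Plain arithmetic encoding: d shares summing to the secret mod P."""
    shares = [secrets.randbelow(P) for _ in range(d - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def refresh(shares):
    """Naive refresh consuming d-1 fresh random field elements: add r_i to
    share i and subtract it from the last share, preserving the sum."""
    out = list(shares)
    for i in range(len(out) - 1):
        r = secrets.randbelow(P)
        out[i] = (out[i] + r) % P
        out[-1] = (out[-1] - r) % P
    return out

def decode(shares):
    return sum(shares) % P

shares = share(42, 8)
refreshed = refresh(shares)
```

The entire design question in this line of work is which refresh and multiplication strategies, at which randomness cost, admit composable probing-security proofs; this naive refresh is the baseline against which notions like IOS and KIOS are formulated.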
Two number fields are said to be arithmetically equivalent if they have the same Dedekind zeta function. The central question about arithmetic equivalence is to determine how “similar” arithmetically equivalent number fields are. That is, we would like to determine which arithmetic invariants, such as the degree, discriminant, signature, units, class number, etc., are the same, and which ones can differ. A key result about arithmetic equivalence is Gassmann’s theorem, which allows one to answer such questions using Galois theory and representation theory.
I will give a general introduction to arithmetic equivalence, discussing some of the main results such as Gassmann’s theorem and giving examples. I will then introduce the successive minima of a number field, and show that arithmetically equivalent number fields have approximately the same successive minima.
As E varies among elliptic curves defined over the rational numbers, a theorem of Bhargava and Shankar shows that the average rank of the Mordell–Weil group E(Q) is bounded. If we now fix a Galois number field K, how does the Mordell–Weil group E(K) behave on average as a Galois module? We will report on progress on this question, which is obtained by instead studying the associated p-Selmer groups of E/K as Galois modules.
We construct some novel Selmer groups which describe certain invariants of these modules, and go on to study the behaviour of these new Selmer groups. This in turn allows us to give bounds for certain behaviour for the Mordell–Weil groups.
We recently defined, with Kohel, an OSIDH protocol — for oriented supersingular isogeny Diffie-Hellman — by imposing the data of an orientation by an imaginary quadratic ring $\mathcal{O}$ on the category of supersingular elliptic curves. Starting with an elliptic curve $E_0$ oriented by a CM order $\mathcal{O}_K$ of class number one, we push forward the class group action along $\ell$-isogeny chains, on which the class group of an order $\mathcal{O}$ of large index $\ell^n$ in $\mathcal{O}_K$ acts. The map from an $\ell$-isogeny chain to its terminus forgets the structure of the orientation and the original base curve $E_0$. For a sufficiently long random $\ell$-isogeny chain, the terminal curve represents a generic supersingular elliptic curve.
One of the advantages of working in this general framework is that the group action by $\mathrm{Cl}(\mathcal{O})$ can be carried out effectively solely on the sequence of moduli points (such as $j$-invariants) on a modular curve, thereby avoiding expensive generic isogeny computations or the requirement of rational torsion points.
The proposed attacks of Onuki (2021) and Dartois-De Feo (2021) and their analyses motivate the idea of enlarging the class group without touching the key space, using \textit{clouds}. In this talk we propose two approaches to augment $\mathrm{Cl}(\mathcal{O}_n(M))$ in such a way that no effective data is transmitted for a third party to compute cycle relations. In both cases, it comes down to an extension of the initial chain by the two parties separately. In particular, while the original OSIDH protocol made exclusive use of the class group action at split primes in $\mathcal{O}$, we extend the protocol to include descent in the eddies at non-split primes (inert or ramified) or at large primes which are not cost-effective for longer isogeny walks.
We have developed open-source software in C++ for computing with polyhedra, lattices, and related algebraic structures. We will briefly explain its design. Then we will explain how it was used to compute the dual structure of the $W(H_4)$ polytope.
Then we will consider another application: finding a fundamental domain for cocompact subgroups $G$ of $\mathrm{SL}_n(\mathbb{R})$. The approach defines a cone associated with the group and a point $x \in \mathbb{R}^n$. It is a generalization of Venkov's reduction theory for $\mathrm{GL}_n(\mathbb{Z})$. We recall the Poincaré Polyhedron Theorem, which underlies these constructions.
We give an iterative algorithm for computing a fundamental domain. The algorithm is based on linear programming, the Shortest Group Element (SGE) problem and combinatorics. We apply it to the cocompact subgroup of $\mathrm{SL}_3(\mathbb{R})$ defined by Witte for the cubic ring of discriminant $49$.
The lattice isomorphism problem (LIP) asks one to find an isometry between two lattices. It has recently been proposed as a foundation for cryptography in independent works. This problem is the lattice variant of the code equivalence problem, for which the notion of the hull of a code can lead to devastating attacks. In this talk I will present the cryptanalytic role of an adaptation of the hull to the lattice setting, which we call the s-hull. Specifically, we show that the hull can be helpful for geometric attacks: for certain lattices the minimal distance of the hull is smaller than that of the original lattice, and this can be exploited. The attack cost remains exponential, but the constant in the exponent is halved.
Our results suggest that one should be very careful about the geometry of hulls when instantiating LIP for cryptography. They also point to unimodular lattices as attractive options, as these are equal to their own hulls. Remarkably, this is already the case in proposed instantiations, namely the trivial lattice $\mathbb{Z}^n$ and the Barnes-Wall lattices.
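The hull of a binary code, whose lattice analogue (the s-hull) the talk introduces, is simply the intersection of the code with its dual. A brute-force illustration over $\mathbb{F}_2$, with codewords represented as bit-mask integers (a convention chosen here for brevity):

```python
from itertools import product

def span(gens):
    """All codewords of the binary code generated by gens
    (words encoded as bit-mask integers)."""
    words = set()
    for coeffs in product([0, 1], repeat=len(gens)):
        w = 0
        for c, g in zip(coeffs, gens):
            if c:
                w ^= g          # addition over F_2 is XOR
        words.add(w)
    return words

def dual(code_words, n):
    """Brute-force dual code: all length-n words orthogonal (mod 2)
    to every codeword."""
    return {v for v in range(2 ** n)
            if all(bin(v & w).count("1") % 2 == 0 for w in code_words)}

def hull(gens, n):
    """Hull of a binary code: the intersection of C with its dual."""
    C = span(gens)
    return C & dual(C, n)

# The code generated by 1100 and 0011 is self-dual, so its hull is
# the whole code; this mirrors the unimodular (self-dual) lattices
# singled out above.
H = hull([0b1100, 0b0011], 4)
```

For a self-dual code the hull is as large as possible, and the talk's observation is the lattice counterpart: a lattice equal to its own hull leaks nothing extra through it.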
For each finite subgroup $G$ of $\mathrm{PGL}_2(\mathbb{Q})$, and for each integer $n$ coprime to $6$, we construct explicitly infinitely many Galois extensions of $\mathbb{Q}$ with group $G$ and whose ideal class group has $n$-rank at least $\#G-1$. This gives new $n$-rank records for class groups of number fields.
Supersingular isogeny graphs are very complicated and intricate, and are used extensively by cryptographers. On the other side of things, the structure of ordinary isogeny graphs is well understood: connected components look like volcanoes. Throughout this talk we will explore the ordinary $\ell$-isogeny graph over $\mathbb{F}_p$ for various primes $\ell$ and $p$, and answer the following question: given a volcano-shaped graph, can we always find an isogeny graph in which our volcano lives as a connected component?
The classical Deuring correspondence provides a roadmap between supersingular elliptic curves and the maximal orders isomorphic to their endomorphism rings. Building on this idea, we add the data of a cyclic subgroup of prime order $N$ to supersingular elliptic curves, and prove a generalisation of the Deuring correspondence for these objects. We also study the resulting $\ell$-isogeny graphs of supersingular elliptic curves with level-$N$ structure, and the corresponding graphs in the realm of quaternion algebras. The structure of this supersingular $\ell$-isogeny graph underlies the security of a new zero-knowledge proof of isogeny knowledge [Basso-Codogni-Connolly-De Feo-Fouotsa-Lido-Morrison-Panny-Patranabis-Wesolowski 2022].