Exceptionally, the reception desk of the Cellule Informatique is located in office 225, owing to the participation of part of the IT team in the PLM team's work week at CIRM in Marseille.
Recently, [Wenger et al., IEEE S&P 2025] claimed that the 'Cool and Cruel' (C+C) approach to solving LWE with sparse secrets [Nolte et al., AFRICACRYPT 2024] outperforms other approaches, in particular the well-established primal attack.
In this talk I will show (1) that C+C is an instantiation of a known dual attack [Albrecht, EUROCRYPT 2017], and (2) experimental evidence that the primal attack can outperform C+C in regimes similar to those studied by Wenger et al.
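For readers less familiar with the setting, a standard statement of (search) LWE with a sparse secret, in our notation rather than that of the cited works, is:

\[
\text{given } (A, b) \in \mathbb{Z}_q^{m \times n} \times \mathbb{Z}_q^{m} \text{ with } b = A s + e \bmod q, \text{ recover } s,
\]

where the error $e$ has small entries and the secret $s$ has few non-zero coordinates.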
We shall describe a parametrix of the evolution semigroup associated with the Fokker-Planck equation by means of a Fourier integral operator (FIO) with complex phase. Although FIOs with complex phase might look frightening to non-experts, our approach is very elementary and requires no background in microlocal analysis. This talk will be based on ongoing discussions with Paul Alphonse, Matthieu Leautaud and Xue-Ping Wang.
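For orientation, one standard form of the kinetic Fokker-Planck equation, in our notation (the talk may use different conventions or a different potential $V$), is

\[
\partial_t f + v \cdot \nabla_x f - \nabla_x V(x) \cdot \nabla_v f = \Delta_v f + \nabla_v \cdot (v f)
\]

for a density $f(t, x, v)$.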
Oversmoothing has long been identified as a major limitation of Graph Neural Networks (GNNs): input node features are smoothed at each layer and converge to a non-informative representation, provided the weights of the GNN are sufficiently bounded. This assumption is crucial: if, on the contrary, the weights are sufficiently large, then oversmoothing may not happen. In theory, GNNs could thus learn not to oversmooth. However, this does not really happen in practice, which prompts us to examine oversmoothing from an optimization point of view. In this paper, we analyze backward oversmoothing, that is, the notion that the backpropagated errors used to compute gradients are also subject to oversmoothing from output to input. With non-linear activation functions, we outline the key role of the interaction between forward and backward smoothing.
Moreover, we show that, due to backward oversmoothing, GNNs provably exhibit many spurious stationary points: as soon as the last layer is trained, the whole GNN is at a stationary point. As a result, we can exhibit regions where gradients are near-zero while the loss stays high.
The proof relies on the fact that, unlike forward oversmoothing, backward errors are subject to linear oversmoothing even in the presence of non-linear activation functions, so that the average of the output error plays a key role. Additionally, we show that this phenomenon is specific to deep GNNs, and exhibit a counter-example based on Multi-Layer Perceptrons. This paper is a step toward a more complete comprehension of the optimization landscape specific to GNNs.
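As a purely illustrative sketch, unrelated to the paper's actual code, the forward half of the phenomenon is easy to reproduce numerically: with row-normalized propagation and no learned weights, node features collapse to a common vector as depth grows (all names and parameters below are ours):

```python
import numpy as np

# Minimal illustration of forward oversmoothing (not the paper's code):
# repeated propagation with a row-normalized adjacency matrix drives all
# node features toward a single common vector.

rng = np.random.default_rng(0)
n = 50
A = rng.random((n, n)) < 0.1
A = A | A.T                                   # undirected edges
for i in range(n):                            # add a ring so the graph is connected
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = True
A = A.astype(float) + np.eye(n)               # self-loops (aperiodicity)
S = A / A.sum(axis=1, keepdims=True)          # row-normalized propagation D^{-1} A

X = rng.standard_normal((n, 8))               # random input node features
for layer in range(1, 31):
    X = S @ X                                 # one propagation step, no weights/ReLU
    if layer % 10 == 0:
        spread = np.linalg.norm(X - X.mean(axis=0), axis=1).mean()
        print(f"layer {layer:2d}: mean deviation from average feature = {spread:.3e}")
```

The paper's contribution concerns the backward pass; this snippet only visualizes the forward collapse that the analysis builds on.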
We consider the magnetic Dirac operator on a curved strip whose boundary carries the infinite mass boundary condition. When the magnetic field is large, we provide accurate estimates of the essential and discrete spectra. In particular, we give sufficient conditions ensuring that the discrete spectrum is non-empty. This is joint work with Julien Royer from Toulouse III University and Nicolas Raymond from Angers University.
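For orientation, one common convention for this setting (the talk's normalization may differ) is the two-dimensional magnetic Dirac operator

\[
D_B = \sigma_1 (-i\partial_1 - A_1) + \sigma_2 (-i\partial_2 - A_2), \qquad \operatorname{curl} A = B,
\]

acting on spinors $u \in L^2(\Omega; \mathbb{C}^2)$, with the infinite mass boundary condition $-i \sigma_3 (\sigma \cdot \mathbf{n})\, u = u$ on $\partial\Omega$, where $\mathbf{n}$ is the outward unit normal and $\sigma_1, \sigma_2, \sigma_3$ are the Pauli matrices.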
...
Bilevel optimization is a rather young but very active sub-field of mathematical optimization. The main reason is that bilevel optimization problems can serve as a powerful tool for modeling hierarchical decision-making processes, which arise in various real-world applications such as in critical infrastructure defense, transportation, or energy. However, the nested structure of bilevel problems makes them intrinsically hard to solve—even if all parameters of the problem are exactly known. Further challenges arise if problems under uncertainty are considered.
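In its generic form (notation ours, not necessarily the talk's), a bilevel problem reads

\[
\min_{x \in X} F(x, y) \quad \text{s.t.} \quad y \in \arg\min_{y' \in Y} \{ f(x, y') : g(x, y') \le 0 \},
\]

where the leader chooses $x$ while anticipating an optimal response $y$ of the follower.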
In this talk, we begin with a brief introduction to the key concepts of bilevel optimization, covering the structure, characteristics, and commonly used reformulations of these problems. We then explore how techniques from robust optimization can be used to tackle bilevel problems under uncertainty. In particular, we highlight that the sources of uncertainty in bilevel optimization are much richer compared to classic, i.e., single-level, problems since not only the problem data but also the (observation of the) decisions of the two players can be subject to uncertainty.
Finally, we discuss recent algorithmic advances for solving mixed-integer linear bilevel problems with a binary lower level and a Gamma-robust treatment of lower-level objective uncertainty. To solve these Gamma-robust bilevel problems, we present an exact and problem-tailored branch-and-cut approach as well as a heuristic framework. The latter relies on solving a linear number of deterministic bilevel problems so that no problem-specific tailoring is required.
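As a rough sketch of the Gamma-robust model in the spirit of Bertsimas and Sim (the talk's exact formulation may differ), suppose at most Gamma of the uncertain lower-level objective coefficients $c_i$ deviate from their nominal values, by at most $d_i$ each; the follower then hedges against the worst such deviation:

\[
\min_{y \in \{0,1\}^n} \; c^\top y + \max_{S \subseteq \{1, \dots, n\},\, |S| \le \Gamma} \sum_{i \in S} d_i\, y_i.
\]

Classically, such a Gamma-robust counterpart can be reduced to solving $n + 1$ nominal problems, which gives a flavor of why a heuristic may get away with solving only a linear number of deterministic bilevel problems.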
To be determined
To apply kernel methods, we need kernels that are easy to compute and sufficiently rich. How can one design "good" kernels on non-Euclidean spaces, in particular on symmetric spaces, which appear very frequently in applications? We propose a new result, the "Lp Godement theorem", as the main tool for answering this question. We study in particular the case of symmetric spaces that are "cones" (cones of covariance matrices), where the answer takes a very concrete form, supported by applications. Finally, if no positive definite kernel can be found, what can be done? We will show that a lot can be done with kernels that are the difference of two positive definite kernels: instead of learning in an RKHS, one can learn in an RKKS (Reproducing Kernel Krein Space).
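For reference (our notation): a kernel $k$ on a set $\mathcal{X}$ is positive definite if

\[
\sum_{i,j=1}^{m} c_i \overline{c_j}\, k(x_i, x_j) \ge 0 \quad \text{for all } x_1, \dots, x_m \in \mathcal{X},\ c_1, \dots, c_m \in \mathbb{C},
\]

and the indefinite kernels mentioned at the end are those of the form $k = k_+ - k_-$ with $k_\pm$ positive definite, which is precisely the reproducing-kernel condition for a Krein space.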
To be determined