Séminaire Images Optimisation et Probabilités
Organizer: Camille Male
12 January 2023
at 11:00
Salle de Conférences
Hippolyte Labarrière
Automatic FISTA restart
We propose a restart scheme for FISTA (Fast Iterative Shrinkage-Thresholding Algorithm). This method, a generalization of Nesterov's accelerated gradient algorithm, is widely used for large-scale convex optimization problems and provides fast convergence guarantees under a strong convexity assumption. These convergence rates can be extended to weaker hypotheses, such as the Łojasiewicz property, but this requires prior knowledge of the function of interest. In particular, most schemes providing fast convergence for non-strongly convex functions satisfying a quadratic growth condition involve the growth parameter, which is generally unknown. Recent works show that restarting FISTA can ensure fast convergence for this class of functions without requiring any knowledge of the growth parameter. We improve on these restart schemes by providing a better asymptotic convergence rate at a lower computational cost. We present numerical results emphasizing the efficiency of this method.
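For intuition, a minimal sketch of FISTA with the classical gradient-based restart heuristic of O'Donoghue and Candès, applied to the lasso problem; this is not the speaker's automatic scheme, and all parameter choices below are ours.

```python
import numpy as np

def fista_restart(A, b, lam, n_iter=500):
    """FISTA for the lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    with a gradient-based restart heuristic (O'Donoghue & Candes)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        step = y - grad / L
        x_new = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0)  # prox of lam*||.||_1
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        # Restart: if the momentum opposes the direction of progress,
        # reset the inertia; the speaker's automatic criterion differs.
        if np.dot(y - x_new, x_new - x) > 0:
            t_new, y = 1.0, x_new.copy()
        else:
            y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```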
19 January 2023
at 11:00
Salle de Conférences
Camille Male (IMB)
"Introduction aux Probabilités libres part III\n"
26 January 2023
at 11:00
Salle 1
Vanessa Piccolo (ENS Lyon)
Asymptotic spectral density of nonlinear random matrix model via cumulant expansion
In this talk we will study the asymptotic spectral density of the nonlinear random matrix model M = YY* with Y = f(WX), where W and X are random rectangular matrices with i.i.d. entries and f is a nonlinear smooth function applied entrywise. We will derive a self-consistent equation for the Stieltjes transform of the limiting eigenvalue distribution using the resolvent approach via the cumulant expansion. This is based on joint work with Dominik Schröder.
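For readers who want to experiment, a minimal Monte Carlo sketch of the model's empirical spectrum; the normalizations below are a common convention and are our assumption, not taken from the talk.

```python
import numpy as np

# Sample the model M = Y Y^T / m with Y = f(WX), W scaled so that WX has
# O(1) entries, and inspect the eigenvalues whose large-dimension density
# the self-consistent Stieltjes-transform equation describes.
rng = np.random.default_rng(0)
n, d, m = 1000, 1500, 2000            # dimensions, illustrative only
f = np.tanh                           # smooth nonlinearity, applied entrywise
W = rng.standard_normal((n, d)) / np.sqrt(d)
X = rng.standard_normal((d, m))
Y = f(W @ X)
M = Y @ Y.T / m
eigs = np.linalg.eigvalsh(M)

# Empirical Stieltjes transform s(z) = (1/n) sum_i 1/(lambda_i - z), Im z > 0,
# to be compared with the solution of the self-consistent equation.
z = 1.0 + 0.05j
print(np.mean(1.0 / (eigs - z)))
```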
9 February 2023
at 11:00
Salle de Conférences
Sofia Tarricone
On the Janossy densities of the 'thinned' Airy determinantal point process
In this talk, we will show that the Janossy densities of a suitably thinned Airy determinantal point process are described by the Schrödinger and (cylindrical) KdV equations, which are also related to certain integro-differential analogues of Painlevé-type equations. We will first review the known results for the gap probability of the same thinned Airy process, which coincides with the Fredholm determinant of the finite-temperature Airy kernel, and then use them to characterize the Janossy densities as well. The talk is based on (nearly completed) joint work with collaborators from the Université Catholique de Louvain-la-Neuve: T. Claeys, G. Glesner, and G. Ruzza.
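For context, the standard identity behind the gap probability mentioned above, valid for any determinantal point process X with correlation kernel K (here the finite-temperature Airy kernel plays the role of K, restricted to a half-line):

$$\mathbb{P}\bigl(\#(X \cap A) = 0\bigr) \;=\; \det(I - K)_{L^2(A)} \;=\; 1 + \sum_{k \ge 1} \frac{(-1)^k}{k!} \int_{A^k} \det\bigl[K(x_i, x_j)\bigr]_{i,j=1}^{k} \, dx_1 \cdots dx_k.$$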
2 March 2023
at 11:00
Salle de Conférences
Emmanuel Gobet (CMAP)
Estimation of extreme quantiles with neural networks, application to extreme rainfalls
"We propose new parametrizations for neural networks in order to estimate extreme quantiles for both non-conditional and conditional heavy-tailed distributions. All proposed neural network estimators feature a bias correction based on an extension of the usual second-order condition to an arbitrary order. We establish convergence rates in terms of the neural network complexity. The finite sample performances of the non-conditional neural network estimator are compared to other bias-reduced extreme-value competitors on simulated data: our method outperforms them in difficult heavy-tailed situations where other estimators almost all fail. Finally, the conditional neural network estimators are implemented to investigate the behavior of extreme rainfalls as functions of their geographical location in the southern part of France."
16 March 2023
at 11:00
Salle de Conférences
Guy Gilboa (Technion)
The Underlying Correlated Dynamics in Neural Training
"Training of neural networks is a computationally intensive task. The significance of understanding and modeling the training dynamics is growing as increasingly larger networks are being trained. We propose a model based on the correlation of the parameters' dynamics, which dramatically reduces the dimensionality. We refer to our algorithm as Correlation Mode Decomposition (CMD). It splits the parameter space into groups of parameters (modes) which behave in a highly correlated manner through the epochs. We achieve a remarkable dimensionality reduction with this approach, where a network of 11M parameters like ResNet-18 can be modeled well using just a few modes. We observe each typical time profile of a mode is spread throughout the network in all layers. Moreover, retraining the network using our dimensionality reduced model induces a regularization which yields better generalization capacity on the test set.This is a joint work with Rotem Turjeman, Tom Berkov and Ido Cohen."
23 March 2023
at 09:30
Salle 1
Thematic day (organized by Yann Traonmilin)
Inverse problems in imaging: regularization, low-dimensional models, and applications
"L'approche variationnelle de résolution des problèmes inverses en imagerie a connu beaucoup de développements lors des trente dernières années. Son cadre mathématique flexible a permis de montrer des résultats garantissant leurs succès sous des hypothèses sur les paramètres du modèle (parcimonie, nombre de mesures, nature du bruit, etc). De nombreuses questions restent ouvertes dans ce domaine, résolution de problèmes inverses dans des espaces de mesures (ex: super-résolution), régularisation adaptée à de nouveaux modèles de faibles dimension, garanties pour les méthodes de résolution basées sur l'apprentissage profond, etc. Nous proposons pendant cette journée d'aborder les dernières avancées aussi bien pratiques que théoriques dans ce domaine. $$\href{https://gdr-mia.math.cnrs.fr/events/journee_problemes_inverses2023/}{\text{Site de l'événement}}$$"
30 March 2023
at 11:00
Salle 1
Simon Vary
Extensions of principal component analysis: limited data, sparse corruptions, and efficient computation
Principal component analysis (PCA) is a fundamental tool for the analysis of datasets, with widespread applications across machine learning, engineering, and imaging. The first part of the talk is dedicated to solving Robust PCA from subsampled measurements, i.e., the inverse problem posed over the set formed as the additive combination of the low-rank set and the sparse set. Here we develop guarantees based on the restricted isometry property showing that rank-r plus sparsity-s matrices can be recovered by computationally tractable methods from p = O((r(m+n-r)+s) log(mn/s)) linear measurements. The second part of the talk focuses on an efficient way to perform large-scale optimization constrained to the set of orthogonal matrices, as used in PCA and in the training of neural networks. We propose the landing method, which does not enforce orthogonality exactly at every iteration; instead, it controls the distance to the constraint using computationally inexpensive matrix-vector products and enforces exact orthogonality only in the limit. We show the practical efficiency of the proposed methods on video separation, direct exoplanet detection, online PCA, and the robust training of neural networks.
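A minimal sketch of a landing-type update in the spirit of the method described above (after Ablin and Peyré); the step size and penalty weight are illustrative choices of ours.

```python
import numpy as np

def landing_step(X, grad, eta=0.1, lam=1.0):
    """One iteration of a landing-type update for min f(X) s.t. X X^T = I.
    Only matrix products are used; no QR/SVD retraction per iteration."""
    psi = grad @ X.T - X @ grad.T                 # 2 * skew(grad X^T)
    normal = (X @ X.T - np.eye(X.shape[0])) @ X   # pulls X back toward orthogonality
    return X - eta * (0.5 * psi @ X + lam * normal)

# Tiny example: minimize f(X) = ||X - B||_F^2 / 2 over (square) orthogonal X.
rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
X = np.linalg.qr(rng.standard_normal((5, 5)))[0]  # start on the manifold
for _ in range(500):
    X = landing_step(X, X - B)                    # grad f(X) = X - B
print(np.linalg.norm(X @ X.T - np.eye(5)))        # distance to orthogonality, ~ 0
```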
27 April 2023
at 09:30
Salle de Conférences
Journées de probabilités et statistique en Nouvelle Aquitaine
27 April, 9:30-17:30, tapas at 19:30; 28 April, 9:00-12:30, lunch at 12:30
"Trouvez toute l'info sur le lien: https://indico.math.cnrs.fr/event/8848/"
4 May 2023
at 11:00
Salle de Conférences
Guillaume Lauga (ENS Lyon)
Multilevel proximal methods for image restoration
"Solving large scale optimization problems is a challenging task and exploiting their structure can alleviate its computational cost. This idea is at the core of multilevel optimization methods. They leverage the definition of coarse approximations of the objective function to minimize it. In this talk, we present a multilevel proximal algorithm IML FISTA that draws ideas from the multilevel optimization setting for smooth optimization to tackle non-smooth optimization. In the proposed method we combine the classical accelerations techniques of inertial algorithm such as FISTA with the multilevel acceleration. IML FISTA is able to handle state-of-the-art regularization techniques such as total variation and non-local total-variation, while providing a relatively simple construction of coarse approximations. The convergence guarantees of this approach are equivalent to those of FISTA. Finally we demonstrate the effectiveness of the approach on color images reconstruction problems and on hyperspectral images reconstruction problems."
11 May 2023
at 11:00
Salle de Conférences
Florentin Coeurdoux (Toulouse INP)
Plug-and-Play Split Gibbs Sampler: embedding deep generative priors in Bayesian inference
Statistical inference problems arise in numerous machine learning and signal/image processing tasks. Bayesian inference provides a powerful framework for solving such problems, but posterior estimation can be computationally challenging. In this talk, we present a stochastic plug-and-play sampling algorithm that leverages variable splitting to efficiently sample from a posterior distribution. The algorithm draws inspiration from the alternating direction method of multipliers (ADMM) and subdivides the challenging task of posterior sampling into two simpler sampling problems. The first problem depends on the forward model, while the second corresponds to a denoising problem that can be readily carried out by a deep generative model. Specifically, we demonstrate our method using diffusion-based generative models. By efficiently sampling the parameter to infer and the hyperparameters of the problem, the generated samples can be used to approximate Bayesian estimators of the parameters. Unlike optimization methods, the proposed approach provides confidence intervals at a relatively low computational cost. To evaluate the effectiveness of the proposed samplers, we conduct simulations on four commonly studied signal processing problems and compare their performance to recent state-of-the-art optimization and MCMC algorithms.
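A minimal split-Gibbs sketch on a linear-Gaussian toy problem; where the talk plugs in a diffusion-based denoiser for the prior step, a Gaussian prior stands in below so that every conditional stays closed-form. All sizes and parameters are illustrative.

```python
import numpy as np

# Split-Gibbs sampling for y = A x + noise with augmented target
# pi(x, z) ~ exp(-||Ax-y||^2/(2 s^2) - ||x-z||^2/(2 r^2) - ||z||^2/2).
rng = np.random.default_rng(5)
m, n = 30, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
s, r = 0.1, 0.5                            # noise level and splitting parameter
y = A @ x_true + s * rng.standard_normal(m)

Q = A.T @ A / s**2 + np.eye(n) / r**2      # precision of x | z (fixed here)
Q_chol = np.linalg.cholesky(np.linalg.inv(Q))

x = np.zeros(n)
samples = []
for it in range(2000):
    # z-step: sample z | x. Exact for the Gaussian stand-in prior; PnP split
    # Gibbs would call the generative denoiser here instead.
    var_z = 1.0 / (1.0 + 1.0 / r**2)
    z = var_z * (x / r**2) + np.sqrt(var_z) * rng.standard_normal(n)
    # x-step: sample x | z, a Gaussian tied to the forward model only.
    mean_x = np.linalg.solve(Q, A.T @ y / s**2 + z / r**2)
    x = mean_x + Q_chol @ rng.standard_normal(n)
    if it >= 500:                          # discard burn-in
        samples.append(x)
print(np.mean(samples, axis=0)[:5])        # posterior-mean estimate...
print(x_true[:5])                          # ...compared with the truth
```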
25 May 2023
at 11:00
Salle 1
Nicolas Nadisic
Beyond separability in nonnegative matrix factorization
"Nonnegative matrix factorization (NMF) is a commonly used low-rank model for identifying latent features in nonnegative data. It became a standard tool in applications such as blind source separation, recommender systems, topic modeling, or hyperspectral unmixing. Essentially, NMF consists in finding a few meaningful features such that the data points can be approximated as linear combinations of those features. NMF is generally a difficult problem to solve, since it is both NP-hard and ill-posed (meaning there is no unique solution). However, under the separability assumption, it becomes tractable and well-posed. The separability assumption states that for every feature there is at least one pure data point, that is a data point composed solely of that feature. This is known as the 'pure-pixel' assumption in hyperspectral unmixing.In this presentation I will first provide an overview of separable NMF, that is the family of NMF models and algorithms leveraging the separability assumption. I will then detail recent contributions, notably (i) an extension of this model with sparsity constraints that brings interesting identifiability results; and (ii) new algorithms using the fact that, when the separability assumption holds, then there are often more than one pure data point. I will illustrate the models and methods presented with applications in hyperspectral unmixing."
22 June 2023
at 11:00
Salle de Conférences
Benjamin McKenna (Harvard University)
Universality for the global spectrum of random inner-product kernel matrices
In recent years, machine learning has motivated the study of what one might call "nonlinear random matrices." This broad term includes various random matrices whose construction involves the *entrywise* application of some deterministic nonlinear function, such as ReLU. We study one such model, an entrywise nonlinear function of a sample covariance matrix f(X*X), typically called a "random inner-product kernel matrix" in the literature. A priori, entrywise modifications of a matrix can affect the eigenvalues in complicated ways, but recent work of Lu and Yau established that the large-scale spectrum of such matrices actually behaves in a simple way, described by free probability, when the randomness in X is either uniform on the sphere or Gaussian. We show that this description is universal, holding for much more general randomness in X. Joint work with Sofiia Dubova, Yue M. Lu, and Horng-Tzer Yau.
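A quick empirical way to look at the universality claim, with illustrative normalizations of our own choosing: build the inner-product kernel matrix for Gaussian and for Rademacher entries and compare the spectra.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 800, 1600
f = lambda t: t**2 - 1                     # a simple centered nonlinearity

def kernel_spectrum(X):
    """Eigenvalues of the kernel matrix K_ij = f(<x_i, x_j>/sqrt(d)),
    with the (large) diagonal removed and an overall 1/sqrt(n) scaling."""
    G = X.T @ X / np.sqrt(X.shape[0])      # rescaled Gram matrix of the columns
    K = f(G)
    np.fill_diagonal(K, 0)
    return np.linalg.eigvalsh(K / np.sqrt(X.shape[1]))

eig_gauss = kernel_spectrum(rng.standard_normal((d, n)))
eig_rad = kernel_spectrum(rng.choice([-1.0, 1.0], size=(d, n)))
# The two eigenvalue histograms should be close for large n and d,
# reflecting the universality discussed in the talk.
print(np.percentile(eig_gauss, [5, 50, 95]))
print(np.percentile(eig_rad, [5, 50, 95]))
```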
28 September 2023
at 11:00
Salle de Conférences
Julio Backhoff (University of Vienna)
TBA
To be announced.
5 October 2023
at 11:00
Salle de Séminaire
Hui Shi (Bordeaux)
PhD thesis defense