# Séminaire Images Optimisation et Probabilités

2 December 2021
at 11:00
*Salle de Conférences*
Titouan Vayer (ENS Lyon)
**Less is more? How Optimal Transport can help for compressive learning**
Nowadays, large-scale machine learning faces a number of fundamental computational challenges, triggered by the high dimensionality of modern data and the increasing availability of very large training collections. These data can also be of a very complex nature, such as those described by the graphs that are integral to many application areas. In this talk I will present some solutions to these problems. I will introduce Compressive Statistical Learning (CSL) theory, a general framework for resource-efficient large-scale learning in which the training data is summarized in a single small vector (called a sketch) that captures the information relevant to the learning task. We will show how Optimal Transport (OT) can help us establish statistical guarantees for this type of learning problem. I will also show how OT can provide efficient representations of structured data, thanks to the Gromov-Wasserstein distance. I will address concrete learning tasks on graphs, such as online graph subspace estimation and tracking, graph partitioning, clustering, and completion.
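The core sketching idea can be illustrated in a few lines: the whole dataset is compressed into one small vector of averaged random Fourier features (a generalized empirical characteristic function), and sketches of disjoint data batches merge by simple weighted averaging, which is what makes the approach suitable for large or streaming collections. This is a minimal illustrative sketch, not the speaker's code; all names and sizes below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch(X, Omega):
    # Average of random Fourier features over the n samples:
    # one complex vector of size m, independent of n.
    return np.exp(1j * X @ Omega).mean(axis=0)

# Toy data: n samples in dimension d, sketch of size m.
n, d, m = 10_000, 2, 50
X = rng.normal(size=(n, d))
Omega = rng.normal(size=(d, m))  # random frequencies (illustrative choice)
z = sketch(X, Omega)             # the whole dataset summarized in m numbers

# Sketches of disjoint batches merge by weighted averaging,
# so the summary can be built in one streaming pass.
z1 = sketch(X[:4_000], Omega)
z2 = sketch(X[4_000:], Omega)
z_merged = (4_000 * z1 + 6_000 * z2) / 10_000
```

A learning task such as mixture-model fitting would then estimate its parameters from `z` alone, without revisiting `X`.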

16 December 2021
at 11:00
*Salle de Conférences*
Jérôme Stenger (Université Toulouse 3)
**Optimal Uncertainty Quantification of a Risk Measurement**
Uncertainty quantification in a safety analysis study can be conducted by considering the uncertain inputs of a physical system as a vector of random variables. The most widespread approach consists in running a computer model reproducing the physical phenomenon with different combinations of inputs, sampled in accordance with their probability distribution. One can then study the resulting uncertainty on the output, or estimate a specific quantity of interest (QoI). Because the computer model is assumed to be a deterministic black-box function, the QoI depends only on the choice of the input probability measure; it is formally represented as a scalar function defined on a measure space. We propose to gain robustness in the quantification of this QoI. Indeed, the probability distributions characterizing the uncertain inputs may themselves be uncertain: contradictory expert opinions may make it difficult to select a single probability distribution, and the lack of information on the input variables inevitably affects the choice of the distribution. As the uncertainty on the input distributions propagates to the QoI, different choices of input distributions will lead to different values of the QoI. The purpose of this work is to account for this second-level uncertainty. We propose to evaluate the maximum of the QoI over a space of probability measures, an approach known as optimal uncertainty quantification (OUQ). We therefore do not specify a single precise input distribution, but rather a set of admissible probability measures defined through moment constraints; when the QoI is a quasi-convex function, it is then optimized over this measure space. After presenting theoretical results showing that the optimization domain of the QoI can be reduced to the extreme points of the measure space, we present several interesting quantities of interest satisfying the assumptions of the problem.
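The extreme-point reduction can be made concrete on a textbook case: maximize the QoI P(X ≥ t) over all probability measures on [0,1] with prescribed mean m. With one moment constraint plus normalization, the extreme points of the admissible set are discrete measures with at most two support points, so a brute-force search over two-point measures suffices; the analytic optimum here is the Markov bound m/t. This toy example is mine, not taken from the talk.

```python
import numpy as np

def max_exceedance(m, t, grid=201):
    # Maximize the QoI P(X >= t) over two-point measures
    # w * delta_{x1} + (1 - w) * delta_{x2} on [0, 1] with mean m.
    # By the extreme-point reduction, this solves the full OUQ problem.
    best = 0.0
    xs = np.linspace(0.0, 1.0, grid)
    for x1 in xs:
        for x2 in xs:
            if np.isclose(x1, x2):
                continue
            w = (m - x2) / (x1 - x2)  # weight on x1 enforcing the mean constraint
            if not (0.0 <= w <= 1.0):
                continue  # not a valid probability measure
            p = w * (x1 >= t) + (1 - w) * (x2 >= t)
            best = max(best, p)
    return best

# Analytic optimum (Markov bound) is m/t = 0.6; the grid search approaches it.
worst_case = max_exceedance(m=0.3, t=0.5)
```

The maximizer concentrates mass m/t exactly at the threshold t and the rest at 0, illustrating why the worst-case measure is discrete rather than a smooth density.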

6 January 2022
at 11:00
*Salle 285*
Stéphane Dartois (to be confirmed)
**TBA**
TBA

13 January 2022
at 11:00
*Salle de Conférences*
Slim Kammoun
**TBA**
TBA

Organizer: Camille Male