
Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique

Organizers: Ayse Nur Arslan and Frédéric Barraquand.

  • February 4, 2016 at 11:00
    Michael Römer
    Future Demand Uncertainty in Personnel Scheduling: Investigating Deterministic Lookahead Policies using Optimization and Simulation
    One of the main characteristics of personnel scheduling problems is the multitude of rules governing schedule feasibility and quality. This talk deals with an issue in personnel scheduling which is both relevant in practice and often neglected in academic research: when evaluating a schedule for a given planning period, the scheduling history preceding this period has to be taken into account. On the one hand, the history restricts the space of possible schedules, in particular at the beginning of the planning period and with respect to rules whose scope transcends the planning period. On the other hand, the schedule for the planning period under consideration affects the solution space of future planning periods. In particular, if the demand in future planning periods is subject to uncertainty, an interesting question is how to account for these effects when optimizing the schedule for a given planning period. The resulting planning problem can be considered a multistage stochastic optimization problem which can be tackled by different modeling and solution approaches. In this work, we compare different deterministic lookahead policies in which a one-week scheduling period is extended by an artificial lookahead period. In particular, we vary both the length of this lookahead period and the way demand forecasts for it are created. The evaluation is carried out using a stochastic simulation in which weekly demands are sampled and the scheduling problems are solved exactly using mixed integer linear programming techniques. Our computational experiments, based on data sets from the Second International Nurse Rostering Competition, show that the length of the lookahead period is crucial for finding good-quality solutions in the considered setting.
  • February 5, 2016 at 11:00
    Agnès Leroux (postdoc, Optimal): Strategic planning of phytosanitary treatments in wineries.
    Jérémy Guillot (PhD student, Optimal): Aggregation technique applied to a clustering problem for waste collection.

  • April 1, 2016 at 11:00
    Robin Genuer (ISPED, INRIA)
    Random Forests for Big Data
    Big Data is one of the major challenges of statistical science and has numerous consequences from algorithmic and theoretical viewpoints. Big Data always involves massive data, but it also often includes data streams and data heterogeneity. Recently, some statistical methods have been adapted to process Big Data, like linear regression models, clustering methods and bootstrapping schemes. Based on decision trees combined with aggregation and bootstrap ideas, random forests were introduced by Breiman in 2001. They are a powerful nonparametric statistical method that handles, in a single and versatile framework, regression problems as well as two-class and multi-class classification problems. Focusing on classification problems, the aim of this talk is to review available proposals about random forests in parallel environments and about online random forests. Then, we formulate various remarks about random forests in the Big Data context. Finally, we experiment with three variants, involving subsampling, Big Data bootstrap and MapReduce respectively, on two massive datasets (15 and 120 million observations): a simulated one and real-world data.
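To make the subsampling idea concrete, here is a minimal toy sketch (ours, not the authors' implementation): an ensemble of decision stumps, each trained on a small random subsample of a large dataset, combined by majority vote.

```python
import random

random.seed(0)

# Toy two-class data on the real line: class 0 centered at 0, class 1 at 1.
data = [(random.gauss(0.0, 0.3), 0) for _ in range(500)] + \
       [(random.gauss(1.0, 0.3), 1) for _ in range(500)]

def train_stump(sample):
    # Pick the threshold maximizing training accuracy of "predict 1 if x > cut".
    cuts = [i / 20 for i in range(-10, 31)]
    return max(cuts, key=lambda c: sum((x > c) == (y == 1) for x, y in sample))

def subsampled_forest(data, n_trees=25, m=100):
    # Each tree only sees a small random subsample of the full dataset,
    # mimicking the subsampling variant in a Big Data setting.
    return [train_stump(random.sample(data, m)) for _ in range(n_trees)]

def predict(forest, x):
    # Majority vote over the stumps.
    return int(sum(x > cut for cut in forest) * 2 >= len(forest))

forest = subsampled_forest(data)
accuracy = sum(predict(forest, x) == y for x, y in data) / len(data)
```

Even though each stump sees only 10% of the data, the vote recovers near-optimal accuracy on this separable toy problem.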
  • April 8, 2016 at 11:00
    Nathalie Villa-Vialaneix, INRA Toulouse
    Sparse Functional SIR which is interpretable
    In the present work, we focus on the functional regression model in which a real random variable has to be predicted from functional predictors. We use the semiparametric framework of Sliced Inverse Regression (SIR). SIR is an effective method for dimension reduction of high-dimensional data which computes a linear projection of the predictors onto a low-dimensional space, without loss of regression information. The present work addresses the issue of variable selection in functional SIR in order to improve the interpretability of the components. We extend the variable selection approaches developed for multidimensional SIR to select intervals rather than separate evaluation points in the definition domain of the functional predictors. SIR is formulated in different, equivalent ways which are alternately used to produce ridge estimates of the components, which are then shrunk with a group-LASSO-like penalty. An iterative procedure which automatically defines the relevant intervals is also introduced to provide a priori information to the sparse model. The approach is shown to be efficient on simulated data.
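The slicing step at the heart of SIR can be sketched in a few lines (a plain multivariate version, not the functional or sparse variant of the talk; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.standard_normal((n, p))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = (X @ beta) ** 3 + 0.1 * rng.standard_normal(n)  # y depends on X only via X @ beta

def sir_first_direction(X, y, n_slices=10):
    # Standardize predictors (here X already has identity covariance,
    # so we skip the full whitening step of general SIR).
    Z = (X - X.mean(0)) / X.std(0)
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Between-slice covariance of the inverse regression curve E[Z | y].
    M = sum(len(s) / len(y) * np.outer(Z[s].mean(0), Z[s].mean(0)) for s in slices)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, -1]  # leading eigenvector = first EDR direction

d = sir_first_direction(X, y)
alignment = abs(d @ beta)  # close to 1 when the direction is recovered
```

The leading eigenvector of the between-slice covariance recovers the projection direction without modeling the (here cubic) link function.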
  • June 17, 2016 at 11:00
    Vincent Leclère
    DADP: a spatial decomposition algorithm for multistage stochastic optimization problems
    Multistage stochastic optimization problems are hard to solve: their extensive formulation has a huge size. We claim that decomposition methods, which allow this huge problem to be seen as a collection of coordinated smaller problems, are an efficient way to address multistage stochastic optimization. Decomposition can be done in at least three ways: by scenarios (e.g. Progressive Hedging approaches), by time (e.g. dynamic programming or SDDP approaches), or spatially, by disconnecting subparts of the system. The Dual Approximate Dynamic Programming (DADP) algorithm we present falls into the last category and is motivated by the study of the optimal management of a hydroelectric valley composed of N linked dams. Dualizing the coupling constraint and fixing a multiplier allows the problem to be solved as N independent dams. Unfortunately, in a stochastic setting this approach fails. The DADP algorithm relies on an approximation of the multiplier that allows the subproblems to be solved efficiently by dynamic programming. We present theoretical results and an interpretation of the DADP algorithm, as well as numerical results.
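A deterministic miniature of the price decomposition underlying DADP (toy numbers, ours): dualizing a coupling demand constraint with a multiplier makes two quadratic-cost subproblems independent, and a subgradient update drives the multiplier to its optimal value. DADP extends this idea to the stochastic case by approximating the multiplier process.

```python
# Two "dams" with quadratic production costs c*x^2 must jointly meet a
# demand d (the coupling constraint x1 + x2 >= d). With a fixed price lam,
# each subproblem min c*x^2 - lam*x is solved independently in closed form.

def subproblem(c, lam, cap=10.0):
    # argmin over [0, cap] of c*x^2 - lam*x  =>  x = lam / (2c), clipped.
    return min(max(lam / (2 * c), 0.0), cap)

c1, c2, d = 1.0, 2.0, 3.0
lam = 0.0
for k in range(1, 2001):
    x1, x2 = subproblem(c1, lam), subproblem(c2, lam)
    # Subgradient step on the dual: increase the price if demand is unmet.
    lam = max(0.0, lam + (1.0 / k) * (d - x1 - x2))

total = x1 + x2   # converges to d; here lam* = 4, x1* = 2, x2* = 1
```

The coordination is entirely through the scalar price lam; no subproblem ever sees the other's variables.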
  • July 12, 2016 at 16:00
    Marcus Poggi, PUC-Rio
    Multi-Depot Fleet Dimensioning for Seasonal Stochastic Demand
    We address the problem of multi-depot fleet dimensioning for the delivery of a seasonal stochastic demand over a given period. Static and dynamic rules for the allocation of demands to depots are discussed. Given a predicted demand behavior represented by a generation function over time and space, distances between clients and the depots, the cost of leasing one vehicle for the period, the unit cost of traveling a distance for leased and for short-term hired vehicles, and a demand allocation rule, we seek the (possibly heterogeneous) fleet size at each depot that minimizes the expected operating cost for the period. We propose a decomposition approach based on Stochastic Dual Dynamic Programming that allows dealing with the integrality of the subproblems. The resulting solution is then evaluated by simulating the operation over the period via a Monte Carlo method. The sensitivity to the quality of demand prediction and to the demand allocation rule is analyzed. Finally, we discuss implementation issues and the limits of the proposed method.
  • September 23, 2016 at 11:00
    Frédéric Gardi, Innovation 24 & LocalSolver (Groupe Bouygues)
    Solving routing and scheduling problems using LocalSolver
    In this talk, we introduce LocalSolver, a heuristic solver for large-scale optimization problems. It provides good solutions in short running times for problems described in their mathematical form, without any particular structure. Models supported by LocalSolver involve linear and nonlinear objectives and constraints, including algebraic and logical expressions, in continuous and discrete variables. LocalSolver starts from a possibly infeasible solution and iteratively improves it by exploring neighborhoods. A key differentiator from classical solvers is the integration of small-neighborhood moves whose incremental evaluation is fast, allowing millions of feasible solutions to be explored within minutes on some problems. We will present the set-based modeling formalism recently introduced in LocalSolver. Offering set decisions (lists of integers) and operators (count, at, indexOf, contains, disjoint, partition), this formalism allows routing and scheduling problems to be modeled naturally and compactly, and solved more efficiently than with the traditional Boolean modeling approach related to mixed-integer linear programming. We will show application examples on basic combinatorial problems (traveling salesman, vehicle routing, flowshop scheduling) together with performance benchmarks.
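LocalSolver's internals are proprietary, so the following is only a generic illustration of the incremental-evaluation principle mentioned above: a toy 2-opt local search for the TSP in which each move is evaluated in O(1) by looking at the two edges it changes, instead of re-summing the whole tour.

```python
import math
import random

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(60)]

def dist(a, b):
    return math.dist(pts[a], pts[b])

def tour_length(t):
    return sum(dist(t[i], t[(i + 1) % len(t)]) for i in range(len(t)))

n = len(pts)
tour = list(range(n))
best = tour_length(tour)
for _ in range(20000):
    i, j = sorted(random.sample(range(n), 2))
    if j - i < 2 or (i == 0 and j == n - 1):
        continue  # degenerate move
    a, b = tour[i], tour[i + 1]
    c, d = tour[j], tour[(j + 1) % n]
    # Incremental evaluation: reversing tour[i+1..j] only swaps edges
    # (a,b),(c,d) for (a,c),(b,d), so the cost change is computed in O(1).
    delta = dist(a, c) + dist(b, d) - dist(a, b) - dist(c, d)
    if delta < -1e-12:
        tour[i + 1 : j + 1] = reversed(tour[i + 1 : j + 1])
        best += delta  # maintained incrementally, never recomputed
```

Because `best` is updated by deltas only, millions of candidate moves can be screened at constant cost each, which is exactly what makes small-neighborhood search scale.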
  • October 7, 2016 at 11:00
    Benoit Liquet, Université de Pau et des Pays de l'Adour
    Bayesian Variable Selection Regression of Multivariate Responses for Group Data
    We propose two multivariate extensions of the Bayesian group lasso for variable selection and estimation for data with high-dimensional predictors and multidimensional response variables. The methods utilise spike-and-slab priors to yield solutions which are sparse at either the group level or both the group and individual feature levels. The incorporation of group structure in a predictor matrix is a key factor in obtaining better estimators and identifying associations between multiple responses and predictors. The approach is suited to many biological studies where the response is multivariate and each predictor is embedded in some biological grouping structure such as gene pathways or brain regions. Our Bayesian models are connected with penalized regression, and we prove both oracle and asymptotic distribution properties under an orthogonal design. We derive efficient Gibbs sampling algorithms for our models and provide the implementation in a comprehensive R package called MBSGS, available on CRAN. The performance of the proposed approaches is compared to state-of-the-art variable selection strategies on simulated data sets.
  • October 21, 2016 at 10:00
    Jacek Gondzio
    Stable Column Generation with the Primal-Dual Interior Point Method
    Advantages of interior point methods (IPMs) applied in the context of column generation will be discussed. Some of the many false views of the combinatorial optimization community on interior point methods will be addressed and corrected. The talk will gently introduce some of the relevant mathematical optimization developments and will also briefly mention our software called PDCGM (Primal-Dual Column Generation Method) available for research use: http://www.maths.ed.ac.uk/~gondzio/software/pdcgm.html
  • October 21, 2016 at 11:00
    Sarah Kaakai, Université Pierre et Marie Curie
    Untitled

  • November 4, 2016 at 11:00
    Rodolphe Griset, PhD student, Inria RealOpt team and EDF
    Untitled

  • November 25, 2016 at 11:00
    Gautier Stauffer
    The Stochastic Shortest Path Problem: A polyhedral combinatorics perspective
    The Stochastic Shortest Path problem (SSP) is a natural extension of the deterministic shortest path problem in which traversing an "arc" may now lead to several destinations with different probabilities. In this setting, vertices are called states and arcs are called actions. The goal is to decide, in each time step and each state, which action to take so as to converge to a predefined target with probability one over an infinite time horizon. Taking an action has a cost, and we wish to find a policy that minimizes the average cost over all possible realizations. SSP forms an important class of Markov Decision Processes (MDP) and is extensively used in practice: it arises naturally in robot motion planning, from maneuvering a vehicle over unfamiliar terrain, steering a flexible needle through human tissue, or guiding a swimming micro-robot through turbulent water; and it also has many applications in operations research, artificial intelligence and economics, from inventory control and reinforcement learning to asset pricing. The SSP was studied thoroughly by Bertsekas and Tsitsiklis (1991) and later by Bertsekas and Yu (2016), and it is well understood when there are no nonpositive-cost transition cycles. In particular, it is known that standard methods like Value Iteration and Policy Iteration converge in this case. In this talk we take a fresh look at the problem from a polyhedral combinatorics perspective. We study the natural linear programming relaxation of the problem and show that standard methods in fact also converge when there are no negative-cost transition cycles. This closes the gap with the deterministic shortest path problem. Finally, we show that Dijkstra's algorithm can also be extended to the stochastic setting.
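The Value Iteration method mentioned above can be sketched on a three-state toy SSP (data made up for illustration): the cheap action at state 0 may loop back, yet its expected cost-to-go beats the safe direct move.

```python
# actions[s] = list of (cost, [(prob, next_state), ...]); state 2 is the target.
actions = {
    0: [(4.0, [(1.0, 2)]),                 # action 0: safe direct move to target
        (1.0, [(0.5, 1), (0.5, 0)])],      # action 1: cheap, but may loop back
    1: [(1.0, [(1.0, 2)])],
    2: [],                                 # absorbing target, cost 0
}

# Value Iteration: repeatedly apply the Bellman operator until convergence.
V = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(200):
    for s in (0, 1):
        V[s] = min(c + sum(p * V[t] for p, t in succ) for c, succ in actions[s])

# Greedy policy with respect to the converged values.
policy = {s: min(range(len(actions[s])),
                 key=lambda a: actions[s][a][0]
                 + sum(p * V[t] for p, t in actions[s][a][1]))
          for s in (0, 1)}
```

Here V(1) = 1 and V(0) solves V(0) = 1 + 0.5·V(1) + 0.5·V(0) = 3 < 4, so the risky action is optimal at state 0; convergence is guaranteed because all action costs are positive.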
  • December 2, 2016 at 11:00
    Eduardo Uchoa
    Primal-Dual Algorithms for the Constrained Two-Dimensional Guillotine Cutting Problem
    This work addresses the Constrained Two-Dimensional Guillotine Cutting Problem, with and without item rotation, and presents a primal-dual algorithm with the following original components: (1) an improvement of the primal method from [Velasco and Uchoa 2014], based on Reactive GRASP; (2) Algorithm X, based on integer programming, capable of obtaining the best possible upper bounds with the dynamic programming state-space relaxation [Christofides and Hadjiconstantinou 1995]; (3) Algorithm X2, a generalization of X that uses two-dimensional weights to obtain even stronger upper bounds; (4) the X2H algorithm, an adaptation of X2 that turns it into a primal heuristic. A method with those elements was extensively tested and could consistently find high-quality solutions, certified optimal in many cases. Joint work with André Velasco.
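For background, the classic dynamic program for the unconstrained guillotine problem fits in a few lines (toy items, ours); the constrained version of the talk limits the number of copies of each item, which breaks this simple recursion and motivates the state-space relaxation bounds mentioned above.

```python
from functools import lru_cache

# Unconstrained two-dimensional guillotine cutting: V(w, h) is the best value
# obtainable from a w x h plate using vertical or horizontal edge-to-edge
# (guillotine) cuts, with unlimited copies of each item. No rotation.
items = [(2, 3, 4), (3, 3, 7), (1, 2, 1)]  # (width, height, value)

@lru_cache(maxsize=None)
def V(w, h):
    # Base case: keep the plate whole and place the best single item that fits.
    best = max((v for iw, ih, v in items if iw <= w and ih <= h), default=0)
    for x in range(1, w // 2 + 1):      # vertical guillotine cuts
        best = max(best, V(x, h) + V(w - x, h))
    for y in range(1, h // 2 + 1):      # horizontal guillotine cuts
        best = max(best, V(w, y) + V(w, h - y))
    return best
```

On a 6 x 6 plate, the optimum packs four copies of the 3 x 3 item for a value of 28; with per-item copy limits this recursion is no longer valid, which is exactly what makes the constrained problem hard.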
  • January 20, 2017 at 11:00
    Romain Azaïs, Inria Nancy - Grand Est
    Inference for size-conditioned Galton-Watson trees via their Harris process
    Hierarchical data describe many situations, from the structure of a plant to that of XML files, by way of phylogeny. It is therefore crucial to propose statistical methods that are well suited and robust to modeling assumptions. In this talk, I will introduce an alternative to the more classical analysis techniques based on edit-distance computations. The methodology builds on the asymptotic behavior of certain coding processes of ordered trees, in the probabilistic framework of Galton-Watson trees conditioned on their size. I will show how limit theorems from the probability literature can be exploited to define consistent estimators of the model's parameter of interest. Simulations will illustrate the established convergence results, while an application to real data (the structural history of articles from the online encyclopedia Wikipedia) will demonstrate the practical interest and robustness of the method. Joint work with Alexandre Genadot and Benoît Henry.
  • February 10, 2017 at 11:00
    Vincent Brault, Université Grenoble Alpes
    Segmentation based on rank statistics of the rows and columns of a matrix
    In research on autonomous vehicles, one often needs to study the similarity between images of the environment taken at different times (Birem et al., 2014). Summary data extracted from real video sequences (Korrapati et al., 2013) take the form of matrices in which distinct locations (e.g. straight road, intersection, ...) correspond to relatively homogeneous blocks. The goal is to propose an automatic method to estimate the boundaries of these blocks. To address this question, there exist algorithms developed for the analysis of Hi-C data in biology (Dixon et al., 2012), where the problem is similar. In particular, Brault et al. (2016) propose a segmentation based on rank statistics, which we propose to use. In this talk, we will first motivate the segmentation problem in the autonomous-vehicle setting, then introduce the model and the method used. We will then present the latest results obtained and conclude with comparisons on simulated data and applications to real data. Joint work with Céline Lévy-Leduc, Sarah Ouadah and Laure Sansonnet on the theory, and with Jean-Charles Quinton on the application.
  • March 17, 2017 at 11:00
    Maxime Février, Université Paris-Sud
    Convergence of isolated eigenvalues of large deformed random matrices
    In this talk, through a particular case, we will highlight the general "spikes/outliers" phenomenon: the convergence of the isolated random eigenvalues (the "outliers") of a large random matrix deformed by a deterministic perturbation, when the latter admits isolated deterministic eigenvalues (the "spikes"). The particular case we consider is the additive model A + UBU*, where A and B are deterministic Hermitian matrices and U is distributed according to the Haar measure on the unitary group. We will see that free probability theory sheds light on this phenomenon. Joint work with S. Belinschi, H. Bercovici and M. Capitaine.
  • March 24, 2017 at 11:00
    Ruslan Sadykov
    A Branch-Cut-and-Price algorithm for heterogeneous vehicle routing and earliness-tardiness scheduling on unrelated machines
    To be announced.
  • April 5, 2017 at 14:00
    Sophie Michel
    "Relax/Restrict-and-Fix": application à la planification des déplacements de conteneurs dans une plate-forme portuaire
    Des instances de taille réelle d'un problème de gestion des flux dans un port maritime multi-terminal, multi-modal ont pu être résolues grâce à une heuristique appelée ``Restrict-and-Fix''. Cette heuristique est basée sur une décomposition par bloc du problème et peut être vu comme une généralisation de l'heuristique Relax-and-Fix. Dans cet exposé, nous présenterons ces deux heuristiques, le problème de planification des déplacements de conteneurs dans une plate-forme portuaire et les résultats de ces heuristiques sur ce problème.
  • April 7, 2017 at 11:00
    Andrew Miller
    The UPS Orion Project: On Road Integrated Optimization and Navigation
    UPS, the world's leading logistics provider, long known for the efficiency of its operations, began a journey to modernize and further streamline its pickup and delivery operations in 2003. The result is the “On Road Integrated Optimization and Navigation” (ORION) system. Every day, ORION provides an optimized route for each of UPS' 55,000 U.S. drivers based on the packages to be picked up and delivered on that day. Costing more than $295 million to build and deploy, ORION is expected to save UPS $300–$400 million annually. ORION is also contributing to the sustainability efforts of UPS by reducing its CO2 emissions by 100,000 tons annually. To bring this system from concept to reality, UPS instituted extensive changes in management practices to ensure that both users and executives would accept the system. By providing a foundation for a new generation of advanced planning systems, ORION is transforming operations at UPS. This talk will discuss some of the challenges of designing, implementing, and deploying a vehicle routing system on such a large scale in the real world. It will also provide a brief discussion of some of the technical issues and challenges in formulating and solving the underlying vehicle routing problems, which are solved with a proprietary metaheuristic optimization system. These challenges include the need to respect numerous time windows of different kinds, the need to account for consistency (i.e., routes cannot change too much from day to day), and the need to account for preferences of planners and drivers.
  • May 5, 2017 at 11:00
    Madalina Olteanu, SAMM EA 4543, Panthéon-Sorbonne University, Paris, France
    Using SOMbrero for clustering and visualizing complex data
    Over the years, the self-organizing map (SOM) algorithm has proven to be a powerful and convenient tool for clustering and visualizing data. While the original algorithm was designed for numerical vectors, the data available in applications have become more and more complex, frequently too rich to be described by a fixed set of numerical attributes alone. This is the case, for example, when data are described by relations between objects (individuals involved in a social network) or by measures of similarity/dissimilarity. This presentation will illustrate how the SOM algorithm can be used to cluster and visualize complex data such as graphs, categorical time series or panel data. In particular, it will focus on the use of the R package SOMbrero, which implements an online version of the relational self-organizing map, able to process any dissimilarity data. The package offers many graphical outputs and diagnostic tools, and comes with a user-friendly web graphical interface based on R-Shiny. Several examples on various real-world datasets will be given to highlight the functionalities of the package. Joint work with Nathalie Villa-Vialaneix, MIAT, INRA, Toulouse, France.
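SOMbrero itself is an R package; as a language-neutral sketch, here is the classical numeric online SOM update that the relational version generalizes (all parameters here are illustrative):

```python
import math
import random

random.seed(0)

# Data drawn from three well-separated planar clusters.
centers = [(0, 0), (5, 0), (0, 5)]
data = [(cx + random.gauss(0, 0.3), cy + random.gauss(0, 0.3))
        for cx, cy in centers for _ in range(200)]
random.shuffle(data)

# A 1-D map of K units; each online step moves the best-matching unit (BMU)
# and its map-neighbors toward the presented observation.
K = 6
proto = [list(random.choice(data)) for _ in range(K)]
n_steps = 5 * len(data)
for t, (x, y) in enumerate(data * 5):
    lr = 0.5 * (1 - t / n_steps)                  # decreasing learning rate
    bmu = min(range(K),
              key=lambda k: (proto[k][0] - x) ** 2 + (proto[k][1] - y) ** 2)
    for k in range(K):
        h = math.exp(-((k - bmu) ** 2) / 2.0)     # neighborhood on the map
        proto[k][0] += lr * h * (x - proto[k][0])
        proto[k][1] += lr * h * (y - proto[k][1])

# Mean quantization error after training.
qerr = sum(min((px - x) ** 2 + (py - y) ** 2 for px, py in proto)
           for x, y in data) / len(data)
```

The relational variant in SOMbrero replaces the Euclidean BMU search and prototype update with operations expressed purely in terms of pairwise dissimilarities, so the same online scheme applies to non-vector data.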
  • May 12, 2017 at 11:00
    Dehia Ait-Ferhat, Mentor Graphics Corporation and G-Scop
    Mathematical programming approaches for the manufacturing of vias using DSA technology
    Minimizing the manufacturing cost of integrated circuits is an important problem in the semiconductor industry. In our study, we focus on the manufacturing of specific components called vias, which serve as electrical connections between components (transistors). The corresponding manufacturing problem can be modelled as a generalization of the standard coloring problem. We will present two different mathematical programming approaches to this problem, a mixed integer programming approach and a dynamic programming approach, and report some preliminary results.
  • May 18, 2017 at 11:00
    Laurence Maillard-Teyssier, RTE
    Mathematical modeling in the service of electricity transmission (joint seminar with the IOP team)
    In the context of the energy transition, with the massive integration of renewable energies, the growing modulation of electricity consumption and future energy-mix scenarios, RTE (the French electricity transmission system operator) must keep delivering electricity to every point of the territory, instantaneously balancing production and consumption while guaranteeing the safety of the system. Its R&D division develops tools and methods to optimize the technical and economic performance of the French and European power system: risk management, asset management and system operation, supply/demand balancing, market design, integration of new components into the grid... We will present some of the problems that call for mathematical modeling, for instance to explain or forecast (flows, consumption, exchanges, renewable generation...) at all scales of time and space, or to monitor the grid and track equipment aging through the analysis of data from digitized equipment or of images taken by drones.
  • June 30, 2017 at 11:00
    Ruslan Sadykov, Inria RealOpt team, IMB
    A bucket graph based labelling algorithm for the resource constrained shortest path problem with applications to vehicle routing

  • March 2, 2018 at 11:00
    Benoît Henry, École des Mines de Nancy
    Large deviations for scaling limits of deterministic models in adaptive dynamics
    We will study a scaling limit of a partial differential equation modeling the dynamics of a population structured by a quantitative trait and subject to mutations. In the limit of small mutations and long time, this type of equation leads to constrained Hamilton-Jacobi equations (Dieckmann et al., 2005). In this work, we give a representation of the solution of this PDE as the expectation of a functional of a stochastic process (Brownian motion when the mutation operator is a Laplacian). The scaling limit can then be studied through large deviation estimates, and we thus obtain a variational characterization of the limiting Hamilton-Jacobi problem. In some simple cases, we are then able to prove uniqueness of the solution of the variational problem.
  • March 8, 2018 at 13:00
    Anna Bonnet, Université Claude Bernard
    Heritability estimation in high-dimensional sparse linear mixed models
    The heritability of a biological quantitative feature is defined as the proportion of its variation that can be explained by genetic factors. We propose an estimator for heritability in high-dimensional sparse linear mixed models and study its theoretical properties. We highlight the fact that when the size N of the random effects is too large compared to the number n of observations, a precise estimate of heritability cannot be provided. Since in practice almost all datasets satisfy the condition N >= n, we apply a variable selection method to reduce the size of the random effects and improve the accuracy of heritability estimates. However, as shown by our simulations, this kind of approach only works when the number of non-zero components in the random effects (i.e. the genetic variants which have an impact on the phenotypic variations) is small enough. Faced with this limitation, we define an empirical criterion to determine whether or not a variable selection approach can be applied. As an example of its use, we apply our method to estimate the heritability of the volume of several regions of the human brain.
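As a toy illustration of heritability in a linear mixed model (a generic moment estimator on simulated data, not the estimator of the talk; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 500, 1000          # n individuals, N genetic variants (random effects)
h2_true = 0.6             # heritability to recover

# Standardized "genotypes" and a dense random-effect model y = Z u + e,
# with total variance 1 so that h2 = genetic variance.
Z = rng.standard_normal((n, N))
u = rng.normal(0.0, np.sqrt(h2_true / N), N)
e = rng.normal(0.0, np.sqrt(1.0 - h2_true), n)
y = Z @ u + e

# Haseman-Elston-style moment estimator: for i != j,
# E[y_i y_j] = sigma_g^2 * K_ij with K = Z Z^T / N the relatedness matrix,
# so a regression through the origin of y_i y_j on K_ij estimates sigma_g^2.
K = Z @ Z.T / N
mask = ~np.eye(n, dtype=bool)
slope = (K[mask] * np.outer(y, y)[mask]).sum() / (K[mask] ** 2).sum()
h2_hat = slope / y.var()
```

This simple method-of-moments estimate is noisy precisely in the regime the abstract describes (N large relative to n), which is what motivates variable selection on the random effects.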
  • March 26, 2018 at 13:00
    Anaïs Rouanet, University of Cambridge
    Joint mixture models for longitudinal and time-to-event data: application to the study of cognitive decline and dementia
    Joint models make it possible to explore the association between a longitudinal marker and a time-to-event outcome. In studies of cognitive aging, it is also important to account for the competing risk of death, since dementia and death share common risk factors. Moreover, dementia onset times are interval-censored in cohort studies, as a diagnosis can only be made at scheduled medical visits; subjects who develop dementia may die before the visit following onset, without ever being diagnosed. In addition, mixture models can capture the heterogeneity of cognitive decline within the population, identifying subgroups of individuals with common characteristics, called latent classes. I proposed an extension of joint latent-class models for longitudinal and time-to-event data that handles interval censoring and competing risks simultaneously. This model identifies profiles of cognitive decline associated with specific risks of dementia and death. However, the number of latent classes must be fixed a priori and is tuned with a criterion on which there is no consensus. Extending joint mixture models to the Bayesian framework brings modeling flexibility and, in particular, makes it possible to estimate the number of classes directly. Such models can help in understanding chronic diseases or serve in the development of prognostic tools.
  • March 30, 2023 at 11:00
    Boris Detienne, IMB, Université de Bordeaux
    Research ideas on combinatorial min-max regret with decision diagrams
    Open discussion about min-max regret combinatorial optimization problems with objective interval uncertainty.
  • September 21, 2023 at 11:00
    Aurélien Froger, IMB, Université de Bordeaux
    Modelling and solving a stochastic generation and transmission expansion planning problem faced by RTE

  • September 28, 2023 at 11:00
    Liding Xu, Optimix, École Polytechnique
    Cutting planes for signomial programming
    Cutting planes are of crucial importance when solving nonconvex nonlinear programs to global optimality, for example in spatial branch-and-bound algorithms. In this paper, we discuss the generation of cutting planes for signomial programming. Many global optimization algorithms lift signomial programs into an extended formulation, so that they can construct relaxations of the signomial program by outer approximations of the lifted set encoding the nonconvex signomial term sets, i.e., the hypographs or epigraphs of signomial terms. We show that any signomial term set can be transformed into a subset of the difference of two concave power functions, from which we derive two kinds of valid linear inequalities. Intersection cuts are constructed using signomial-term-free sets, which do not contain any point of the signomial term set in their interior. We show that these signomial-term-free sets are maximal in the nonnegative orthant, and use them to derive intersection cuts. We then convexify a concave power function in the reformulation of the signomial term set, resulting in a convex set containing the signomial term set. This convex outer approximation is constructed in an extended space, and we separate a class of valid linear inequalities by projection from this approximation. We implement the valid inequalities in a global optimization solver and test them on MINLPLib instances. Our results show that both types of valid inequalities provide comparable reductions in running time, number of search nodes, and duality gap.
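For reference, a signomial program as discussed above can be written as follows (notation ours):

```latex
\begin{aligned}
\min_{x > 0} \quad & \sum_{k} c_{0k} \prod_{i=1}^{n} x_i^{a_{0ki}} \\
\text{s.t.} \quad & \sum_{k} c_{jk} \prod_{i=1}^{n} x_i^{a_{jki}} \le 0,
\qquad j = 1, \dots, m,
\end{aligned}
```

where the coefficients $c_{jk}$ may be negative and the exponents $a_{jki}$ are arbitrary reals, which is what makes the terms nonconvex. The lifting mentioned in the abstract introduces one variable $t_{jk} = \prod_i x_i^{a_{jki}}$ per signomial term, so that building a relaxation reduces to outer-approximating the epigraphs or hypographs of these individual terms.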
  • Le 12 octobre 2023 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Pierre Pesneau IMB, Université de Bordeaux
    The maximum chordal subgraph problem

  • Le 19 octobre 2023 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Théo Guyard Centre Inria de l’Université de Rennes
    A discrete optimization approach to L0-norm problems
    Problems involving the L0-norm have long been considered too difficult to solve, despite their practical interest. Recently, researchers from the discrete optimization community have shifted their attention towards this class of problems. Using tools traditionally employed in their field, they have unlocked the possibility of handling L0-norm problems in practice. In the first part, this talk will outline the applications of problems involving the L0-norm, their advantages and flaws, and will briefly review the history of recent research on this topic. Then, discrete optimization tools that can be used to address this type of problem will be presented. Specifically, we will focus on generic Mixed-Integer Programming solvers and specialised branch-and-bound algorithms. Existing software for practitioners will also be discussed. In the last part, we will provide insights into ongoing research directions regarding L0-norm problems and the different researchers and laboratories involved.
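    As a toy illustration of the problem class discussed above (not of the talk's solvers), the L0-constrained least-squares problem can be solved by brute-force support enumeration on tiny instances; all data and function names below are made up for the example:

```python
from itertools import combinations

def solve_normal_equations(A, b):
    """Solve the small dense system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def best_subset(X, y, k):
    """min ||y - X b||^2 s.t. ||b||_0 <= k, by enumerating all supports of size k."""
    n, p = len(X), len(X[0])
    best_support, best_sse = None, float("inf")
    for support in combinations(range(p), k):
        Z = [[X[i][j] for j in support] for i in range(n)]
        # normal equations (Z^T Z) beta = Z^T y on the selected columns
        A = [[sum(Z[i][r] * Z[i][s] for i in range(n)) for s in range(k)] for r in range(k)]
        rhs = [sum(Z[i][r] * y[i] for i in range(n)) for r in range(k)]
        beta = solve_normal_equations(A, rhs)
        sse = sum((y[i] - sum(Z[i][r] * beta[r] for r in range(k))) ** 2 for i in range(n))
        if sse < best_sse:
            best_support, best_sse = support, sse
    return best_support, best_sse

# y depends exactly on features 0 and 2: y = 2*x0 - 3*x2
X = [[1, 2, 0], [2, 1, 1], [0, 1, 3], [1, 0, 2], [3, 1, 1], [2, 2, 0]]
y = [2 * row[0] - 3 * row[2] for row in X]
support, sse = best_subset(X, y, k=2)
```

    Generic MIP solvers and the specialised branch-and-bound algorithms mentioned in the talk replace this exponential enumeration with implicit enumeration and pruning.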
  • Le 26 octobre 2023 à 10:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Yue Su École des Ponts ParisTech
    Heuristic and Exact Algorithms for Solving the Electric Autonomous Dial-A-Ride Problem
    We propose highly efficient heuristic and exact algorithms to solve the Electric Autonomous Dial-A-Ride Problem (E-ADARP), which consists in designing a set of minimum-cost routes that accommodates all customer requests for a fleet of Electric Autonomous Vehicles (EAVs). The E-ADARP has two important features: (i) the employment of EAVs and a partial recharging policy; (ii) a weighted-sum objective function that minimizes the total travel time and the total excess user ride time. We first propose a Deterministic Annealing (DA) algorithm to solve the E-ADARP. Partial recharging (i) is handled by an exact route evaluation scheme of linear time complexity. To tackle (ii), we propose a new method that allows effective computation of the minimum excess user ride time by introducing a fragment-based representation of paths. To validate the performance of the DA algorithm, we compare our results to the best-reported Branch-and-Cut (B&C) results on existing instances. Our DA algorithm provides 25 new best solutions and 45 equal solutions on 84 existing instances. To test the algorithm's performance on larger instances, we establish new instances with up to 8 vehicles and 96 requests, and we provide 19 new solutions for these instances. Then, we present a highly efficient column generation (CG) algorithm, which is integrated into a Branch-and-Price (B&P) scheme to solve the E-ADARP exactly. Our CG algorithm relies on an effective labeling algorithm to generate columns with negative reduced costs. In the extension of labels, the key challenge is determining all excess-user-ride-time-optimal schedules to ensure finding the minimum-negative-reduced-cost route. To handle this issue, we apply the fragment-based representation and propose a novel approach that abstracts fragments to arcs while ensuring excess-user-ride-time optimality.
We then construct a new graph that preserves all feasible routes of the original graph by enumerating all feasible fragments, abstracting them to arcs, and connecting them with each other, depots, and recharging stations in a feasible way. On the new graph, we apply strong dominance rules and constant-time feasibility checks to compute the shortest paths efficiently. In the computational experiments, we solve 71 out of 84 instances optimally, improve 30 previously reported lower bounds, and generate 41 new best solutions on previously solved and unsolved instances.
  • Le 26 octobre 2023 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Luca Ferrarini École des Ponts ParisTech
    A polyhedral approach to the total matching problem
    A total matching of a graph G = (V,E) is a subset of its vertices and edges whose elements are pairwise non-adjacent. In this context, the Total Matching Problem calls for a total matching of maximum size. This problem generalizes both the Stable Set Problem, where we look for a stable set of maximum size, and the Matching Problem, where we instead look for a matching of maximum size. In this talk, we present a polyhedral approach to the Total Matching Problem and introduce the corresponding polytope, namely the Total Matching Polytope. To this end, we present several families of nontrivial valid inequalities that are facet-defining for the Total Matching Polytope. In addition, we provide a first complete linear description for trees and complete bipartite graphs. For the latter family, the complete characterization is obtained by projecting a higher-dimensional polytope onto the original space. This also yields a compact extended formulation for the Total Matching Polytope of complete bipartite graphs.
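    To make the definition concrete, here is a small brute-force sketch (hypothetical code, unrelated to the polyhedral methods of the talk) that computes a maximum total matching of a tiny graph by enumeration:

```python
from itertools import combinations

def max_total_matching(vertices, edges):
    """Return one maximum total matching of the graph (V, E) by full enumeration."""
    def adjacent(a, b):
        a_edge, b_edge = isinstance(a, frozenset), isinstance(b, frozenset)
        if a_edge and b_edge:
            return len(a & b) > 0            # edges sharing an endpoint
        if a_edge != b_edge:
            vertex = b if a_edge else a
            edge = a if a_edge else b
            return vertex in edge            # vertex incident to the edge
        return frozenset((a, b)) in edges    # vertices joined by an edge

    elements = list(vertices) + list(edges)
    for size in range(len(elements), 0, -1):     # largest feasible subset wins
        for subset in combinations(elements, size):
            if all(not adjacent(x, y) for x, y in combinations(subset, 2)):
                return set(subset)
    return set()

# Path a-b-c-d: the unique maximum total matching is {a, d, bc}, of size 3.
V = {"a", "b", "c", "d"}
E = {frozenset(pair) for pair in [("a", "b"), ("b", "c"), ("c", "d")]}
tm = max_total_matching(V, E)
```

    This exponential enumeration is only meant to illustrate the object being optimized; the talk's contribution is the polyhedral description that makes the problem amenable to linear programming techniques.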
  • Le 9 novembre 2023 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Anton Medvedev Cedric-CNAM
    Finite adaptability for robust optimization applied to the glass industry
    Finite adaptability is a resolution framework for two-stage robust optimization problems, introduced in 2010 by Bertsimas and Caramanis. We apply this resolution scheme to a production problem from the glass industry, namely coated glass production, in which thin layers of material are deposited on glass sheets using a magnetron to give them visual and thermal properties. The exact production plan is uncertain, and the magnetron maintenance decisions have to be taken before the realization of the uncertainty.
    Along with the description and formulation of the above-stated industrial problem and of its resolution scheme, the presentation will address two theoretical results in the context of finite adaptability. The first is the convergence of finite adaptability to complete adaptability under some continuity assumptions. The second is a specific setting in which finite adaptability can be solved in polynomial time.
  • Le 16 novembre 2023 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Aurélien Froger Université de Bordeaux
    Working group on "DeLuxing: Deep Lagrangian Underestimate Fixing for Column-Generation-Based Exact Methods"

  • Le 23 novembre 2023 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Cristina Molero École Polytechnique
    Classification and regression trees via mathematical optimization
    Contrary to classic classification and regression trees, which are built in a greedy heuristic manner, designing the tree model through an optimization problem allows us to easily include desirable Machine Learning properties in addition to prediction accuracy. In this talk, we present a Non-Linear Optimization approach that is scalable with respect to the size of the training sample, and we illustrate this flexibility by modeling several important issues in Explainable and Fair Machine Learning. These include sparsity, as a proxy for interpretability, by reducing the amount of information necessary to predict well; fairness, by aiming to avoid predictions that discriminate against sensitive features such as gender or race; cost-sensitivity for groups of individuals for whom prediction errors are more critical, such as patients with a disease, by ensuring an acceptable accuracy performance for them; local explainability, where the goal is to identify the predictor variables that have the largest impact on individual predictions; as well as data complexity in the form of observations of a functional nature. The performance of our approach is illustrated on real and synthetic data sets.
  • Le 30 novembre 2023 à 10:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Daniil Khachai Kedge Business School
    Efficient Algorithms for Routing Problems with Specific Constraints
    This thesis focuses on algorithmic design for three combinatorial optimization problems related to transportation, logistics and production research with specific types of industrial constraints. First, we consider the Precedence Constrained Generalized Traveling Salesman Problem (PCGTSP). This problem is an extension of two well-known combinatorial optimization problems — the Generalized Traveling Salesman Problem (GTSP) and the Precedence Constrained Asymmetric Traveling Salesman Problem (PCATSP), whose path version is known as the Sequential Ordering Problem (SOP).

    Similarly to the classic GTSP, the goal of the PCGTSP is to find, for a given input digraph and a partition of its node set into clusters, a minimum-cost cyclic route (tour) visiting each cluster in a single node. In addition, as in the PCATSP, feasible tours are restricted to visit the clusters according to a given partial order. Unlike the GTSP and SOP, to the best of our knowledge, the PCGTSP remains weakly studied both in terms of polyhedral theory and algorithms. In this thesis, for the first time for the PCGTSP, we propose several families of valid inequalities, establish the dimension of the PCGTSP polytope, and prove sufficient conditions ensuring that the extended Balas' pi- and sigma-inequalities become facet-inducing. Relying on these theoretical results and on existing algorithmic approaches for the PCATSP and SOP, we introduce a family of MILP models and several variants of a branch-and-cut algorithm for the PCGTSP. We study their performance on the instances of the public benchmark library PCGTSPLIB, a known adaptation of the classic SOPLIB to the problem in question. The obtained results show the efficiency of the algorithm. The paper was published in the European Journal of Operational Research.

    Our second research topic is related to a specific industrial application of the PCGTSP: the discrete Cutting Path Problem (CPP). In this problem, we aim to find an optimal path for a cutting tool in order to minimize the total processing cost, including cutting, air-motion, piercing, and other expenses, subject to constraints induced by industrial cutting restrictions. It is convenient to consider such restrictions in terms of precedence constraints. We introduce a general solution framework for the CPP that includes: (i) a universal approach for reducing numerous variants of this problem to the Precedence Constrained Generalized Traveling Salesman Problem; (ii) methodological support for finding (sub-)optimal solutions of this problem on the basis of a branch-and-cut algorithm and the PCGLNS meta-heuristic. The results of computational experiments show the efficiency of the proposed framework for solving industrial instances of the problem. The paper was submitted to the International Journal of Production Research.

    Finally, we tackle the Capacitated Vehicle Routing Problem (CVRP). The CVRP is strongly NP-hard (even on the Euclidean plane), hard to approximate in the general case, and APX-complete for an arbitrary metric. However, for geometric settings of the problem, there are a number of known quasi-polynomial and even polynomial time approximation schemes. Among these results, the well-known Quasi-Polynomial Time Approximation Scheme (QPTAS) proposed by A. Das and C. Mathieu appears to be the most general. In this thesis, we propose the first extension of this scheme to a wider class of metric spaces. Specifically, we show that the metric CVRP admits a QPTAS whenever the problem is set up in a metric space of any fixed doubling dimension d > 1 and the capacity does not exceed polylog(n). The paper was published in the Journal of Global Optimization.
  • Le 14 décembre 2023 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    François Clautiaux Université de Bordeaux
    Models and algorithms for configuring and testing prototype cars
    In this presentation, we consider a new industrial problem arising in the automobile industry during the testing phase of a new vehicle. It involves determining all the variants of the vehicle to be manufactured in order to carry out these tests, and scheduling the tests over time. We model this problem as a new scheduling problem. Given a set of machines and a set of jobs, we seek a fixed configuration for each machine (i.e., a set of values for various parameters) and an assignment of jobs to machines along the time horizon that respects compatibility constraints between jobs and machine configurations. Two objectives are optimized lexicographically: the number of late jobs and the number of machines used. This problem involves a notion of configuration that is not addressed in the literature. First, we prove that even finding a feasible solution is NP-hard, and we characterize the cases where the compatibility constraints amount to ensuring that only pairwise-compatible jobs are assigned to each machine. We then propose a mathematical model for this problem and a reformulation into a path-flow formulation. To deal with the new notion of configuration, we propose a refined labelling algorithm embedded in a column-and-row generation algorithm to generate primal and dual bounds from this formulation. We conducted computational experiments on industrial data from Renault and compared our results with those obtained by solving a constraint programming model provided by the company. Our approach finds better solutions than those obtained by the company and proves optimality for all instances of our benchmark for the first objective function. We also obtain small optimality gaps for the second objective function.
  • Le 11 janvier 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Gautier Stauffer HEC Lausanne
    Horizontal collaboration in forestry: game theory models and algorithms for trading demands
    We introduce a new cooperative game theory model, which we call the production-distribution game, to address a major open problem for operations research in forestry, raised by Rönnqvist et al. in 2015: modelling and proposing efficient sharing principles for practical collaboration in transportation in this sector. The originality of our model lies in the fact that the value/strength of a player does not only depend on the individual cost or benefit of the goods she owns but also on her market shares (customers' demand). We show that our new model is an interesting special case of a market game introduced by Shapley and Shubik in 1969. As such, it has a non-empty core. We prove that we can compute both the nucleolus and the Shapley value efficiently in a nontrivial, interesting special case. We provide two algorithms to compute the nucleolus: a simple separation algorithm and a fast primal-dual one. We also show that our results can be used to tackle more general versions of the problem, and we believe that our contribution paves the way towards solving the challenging open problems herein.
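    For readers unfamiliar with the solution concepts named above, the Shapley value of a small cooperative game can be computed by averaging marginal contributions over all arrival orders; the sketch below uses the classic glove game as a stand-in example (it is not the production-distribution game of the talk, whose point is precisely that these values can be computed efficiently rather than by exponential enumeration):

```python
from itertools import permutations

def shapley_value(players, v):
    """Average marginal contributions over all arrival orders (exponential-time)."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)  # marginal contribution of p
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

# Glove game: player 1 owns a left glove, players 2 and 3 a right glove each;
# a coalition's worth is its number of matched left/right pairs.
def glove(S):
    return min(len(S & {1}), len(S & {2, 3}))

phi = shapley_value([1, 2, 3], glove)
```

    For this game the scarce left glove earns player 1 a value of 2/3, while players 2 and 3 each get 1/6.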
  • Le 18 janvier 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Mariam Sangare Université de Montpellier
    A distributed scheduling method for demand response in energy communities with distributed generation and storage
    We present a distributed optimization method for energy communities with distributed renewable generation and storage units. We explain how the resulting optimization problem can be cast as a bi-level optimization problem in which the followers solve mixed-integer linear programs. Given the difficulty of these problems, we develop a heuristic in which incentives are sent to the followers. Specifically, to maximize the collective self-consumption rate, we exploit the notion of allocation keys, traditionally used a posteriori for economic gain sharing, to build an a priori incentive for demand response. The proposed method returns high-quality solutions, close to the optimal centralized ones, on real-world benchmark instances constructed over a month with accurate historical data from our demonstrator in the south of France.
  • Le 22 janvier 2024 à 14:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    François Lamothe LAAS-CNRS, Toulouse
    Ideas and directions for improving decomposition methods in OR
    Decomposition methods are optimization algorithms widely used in Operations Research. Despite their long history, many questions about them remain without definitive answers. In this talk, we will give a geometric presentation of two decomposition methods (Dantzig-Wolfe and Benders) before proposing directions for improvement. In particular, we will discuss:
    - the choice of cuts in Benders-type cutting-plane methods
    - degeneracy in the Dantzig-Wolfe method
    - the use and interaction of inner and outer approximations of polyhedra in the Dantzig-Wolfe method.
  • Le 25 janvier 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Artur Pessoa Universidade Federal Fluminense (Brasil)
    New Results on Vehicle Routing Problems with Special Demand Structures
    In this work, we present new results from ongoing work with two postgraduate students at UFF, Brazil. Both results concern variants of the vehicle routing problem with special demand structures. The first is a new family of capacity-like cuts for a variant in which the visited points (facilities) differ from the demand points. In this variant, called the m-CTP, each facility has an associated subset of demand points that it can serve. Experiments show that the proposed cut family reduces the time required to solve some literature instances by more than an order of magnitude. The second result concerns the vehicle routing problem with split deliveries. For this variant, we present a polynomial algorithm to recognize whether a set of graph edge multiplicities corresponds to a feasible solution. The new algorithm relies on a special property of the edge multiplicities that can easily be enforced by cuts in an MIP formulation. For general edge multiplicities, the recognition problem has long been known to be NP-hard.
  • Le 15 février 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Alexandre Genadot IMB
    Estimation and control principles for continuous-time Markov decision processes

  • Le 14 mars 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Henri Lefebvre Department of Mathematics, University of Trier
    A Column-and-Constraint Generation Algorithm for Bilevel Optimization
    Bilevel optimization is a powerful tool for modeling complex hierarchical decisions. In particular, it addresses situations in which decision makers must take into account other agents with independent utility functions when making their decisions. Though these problems are challenging on their own, the research community has devoted considerable effort to solving linear, or at most convex, bilevel problems, to the point that it is now conceivable to solve much larger instances than was possible years ago. Unfortunately, most existing approaches rely on duality theory and therefore cannot be generalized to the nonconvex setting, which occurs when all players solve a nonconvex problem. In this talk, we introduce a new algorithm for solving a class of bilevel optimization problems with nonconvex followers and binary leader decisions. The algorithm is based on approximating bilevel problems by generalized two-stage robust problems, for which we design a column-and-constraint generation approach. Additionally, we present a stabilization scheme for our method and report some early computational experiments.
  • Le 21 mars 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Emmanuel Hebrard LAAS-CNRS-Toulouse
    An Efficient Constraint Programming Approach to Preemptive Job Shop Scheduling
    Constraint Programming (CP) has been widely, and very successfully, applied to scheduling problems. However, the focus has been on uninterruptible tasks, and preemptive scheduling problems are typically harder for existing constraint solvers. Indeed, one usually needs to represent all potential task interruptions thus introducing many variables and symmetrical or dominated choices.
    I will talk about CP approaches to both non-preemptive and preemptive job shop scheduling, and then I will show how methods for the former can be applied with very little change to the latter. The resulting approach does not require an explicit model of task interruptions and as a result significantly outperforms state-of-the-art dedicated approaches in our experimental results.
  • Le 28 mars 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Romain Guillaume IRIT, Université Toulouse - Jean Jaurès
    Sequential decision-making under uncertainty
    Uncertainty is inherent to decision problems. In many production-system design contexts (additive-manufacturing workshops, reverse supply chains and mine planning, aeronautical assembly lines), decisions are not all taken simultaneously but sequentially, depending on the situation at hand and on the available level of knowledge. Knowledge about the possible worlds may be more or less precise, and depending on this degree of knowledge, different theories can be used, such as probability theory, belief functions, or possibility theory. The sequential nature of the decisions makes adopting an optimal strategy more complex. In this talk, I will first present the specific features of sequential decision-making under uncertainty, then propose new criteria that account for the decision maker's attitude towards uncertainty in the cases of total uncertainty and possibilistic uncertainty. The second part will present two impossibility theorems that help explain the difficulties of sequential decision-making under uncertainty with belief functions, and will introduce the first criterion satisfying the properties of sequential decision-making with belief functions.
  • Le 11 avril 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Junkai He Télécom SudParis
    Maintenance Optimization for Complex Manufacturing Systems Using Stochastic Remaining Useful Life Prognostics
    We leverage Remaining Useful Life (RUL) prognostic information for preventive maintenance planning in complex manufacturing factories. Such a factory consists of multiple complex systems that use redundant components as backups to ensure system availability. The purpose is to minimize production fluctuations in the factory due to maintenance or system breakdown. To achieve this, we propose Mixed-Integer Linear Programming (MILP) models to minimize the overall production loss. Furthermore, we incorporate random RUL decrease rates according to reliability theory and extend the MILP model using chance-constrained programming. A novel approximation method for dealing with chance constraints is proposed and approximation properties are analyzed. Computational results demonstrate the efficacy of the proposed methods in maintenance planning under different levels of uncertainty.
  • Le 23 mai 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Boris Detienne Université de Bordeaux
    The Benders-by-batch algorithm to solve two-stage stochastic linear programs

    In this talk, we will introduce a new exact algorithm to solve two-stage stochastic linear programs. Based on the multicut Benders reformulation of such problems, with one subproblem per scenario, this method relies on a partition of the subproblems into batches. The key idea is to solve, at most iterations, only a small proportion of the subproblems, by detecting as soon as possible that a first-stage candidate solution cannot be proven optimal. We also propose a general framework to stabilize our algorithm, and show its finite convergence and exact behavior. We report an extensive computational study on large-scale instances from the stochastic optimization literature that shows the efficiency of the proposed algorithm compared to nine alternative algorithms from the literature. We also obtain significant additional savings in computational time using the primal stabilization schemes.
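    The batching idea can be sketched on a toy one-dimensional two-stage problem; everything below (the problem data, the grid-search "master", the batch partition) is an illustrative assumption and not the paper's implementation:

```python
# Toy two-stage problem: min_x c*x + (1/S) * sum_s Q_s(x), x in [0, 10],
# with Q_s(x) = q * max(d_s - x, 0). Each Q_s is convex piecewise linear,
# so Benders cuts are subgradient cuts; the master is minimized by grid
# search here simply to avoid depending on an LP solver.
c, q = 1.0, 3.0
demands = [2.0, 4.0, 6.0, 8.0]            # one scenario per demand value
S = len(demands)
grid = [i * 0.5 for i in range(21)]       # crude "master solver": argmin over a grid
batches = [[0, 1], [2, 3]]                # partition of the scenarios into batches

def Q(s, x):                              # exact second-stage cost of scenario s
    return q * max(demands[s] - x, 0.0)

def make_cut(s, x0):                      # Benders (subgradient) cut of Q_s at x0
    g = -q if demands[s] > x0 else 0.0
    v = Q(s, x0)
    return lambda x: v + g * (x - x0)

cuts = [[make_cut(s, 0.0)] for s in range(S)]

def lower_bound(x):                       # master objective: cuts underestimate each Q_s
    return c * x + sum(max(f(x) for f in cuts[s]) for s in range(S)) / S

while True:
    x_cand = min(grid, key=lower_bound)   # solve the relaxed master
    violated = False
    for batch in batches:                 # evaluate subproblems batch by batch ...
        for s in batch:
            if Q(s, x_cand) > max(f(x_cand) for f in cuts[s]) + 1e-9:
                violated = True
        if violated:                      # ... stopping as soon as one batch disproves
            for s in batch:               # the candidate, adding cuts only for it
                cuts[s].append(make_cut(s, x_cand))
            break
    if not violated:
        break                             # no violated cut anywhere: grid-optimal
```

    The early break is the point: once a batch shows the candidate cannot be optimal, the remaining subproblems are not evaluated at this iteration, which is where the method described above saves work.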


  • Le 30 mai 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Ayse N. Arslan Centre Inria de l'Université de Bordeaux
    A tutorial on decision rules for robust adjustable optimization

    Adjustable robust optimization problems, as a subclass of multi-stage optimization under uncertainty problems, constitute a class of problems that are very difficult to solve in practice. Although the exact solution of these problems under certain special cases may be possible, for the general case, there are no known exact solution algorithms. Instead, approximate solution methods have been developed, often restricting the functional form of recourse actions, these are generally referred to as “decision rules“. In this talk, we will present a review of existing decision rule approximations including affine and extended affine decision rules, uncertainty set partitioning schemes and finite-adaptability. We will discuss the reformulations and solution algorithms that result from these approximations. We will point out existing challenges in practical use of these decision rules, and identify current and future research directions. When possible we will emphasize the connections to multi-stage stochastic programming literature.


  • Le 6 juin 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Eduardo Moreno Universidad Adolfo Ibáñez, Santiago, Chile / Google Research, Paris, France
    Benders Adaptive-Cuts Method for Two-Stage Stochastic Programs

    Two-stage stochastic programs (TSSP) are a classic model in which a decision must be made before the realization of a random event, allowing recourse actions to be performed after observing the random values. For example, many classic optimization problems, like network flow or facility location problems, become TSSPs if we consider a random demand.

    Benders decomposition is one of the most widely applied methods to solve TSSPs with a large number of scenarios. The main idea behind Benders decomposition is to solve a large problem by replacing the values of the second-stage subproblems with individual variables, and progressively forcing those variables to reach the optimal values of the subproblems by dynamically inserting additional valid constraints, known as Benders cuts. Most traditional implementations add a cut for each scenario (multi-cut) or a single cut that includes all scenarios.

    In this paper, we present a novel Benders adaptive-cuts method, in which the Benders cuts are aggregated according to a partition of the scenarios that is dynamically refined using the LP dual information of the subproblems. This scenario aggregation/disaggregation is based on the Generalized Adaptive Partitioning Method (GAPM). We formalize this hybridization of Benders decomposition and the GAPM by providing sufficient conditions under which an optimal solution of the deterministic equivalent can be obtained in a finite number of iterations. Our new method can be interpreted as a compromise between the Benders single-cut and multi-cut methods, drawing on the advantages of both: the initial iterations are faster (as with single-cut Benders) while overall convergence is faster (as with multi-cut Benders).

    Computational experiments on three TSSPs validate these statements, showing that the new method outperforms other implementations of the Benders method, as well as other standard methods for solving TSSPs, in particular when the number of scenarios is very large.


  • Le 20 juin 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Mickael Gaury Kedge BS
    An efficient 2-competitive online algorithm for kit update at MSF Logistique

    This paper explores strategic optimization in updating essential medical kits crucial for humanitarian emergencies. Recognizing the perishable nature of medical components, the study emphasizes the need for regular updates involving complex recovery, substitution and disposal processes with the associated costs. The goal is to minimize costs over an unpredictable time horizon. The introduction of the kit-update problem considers both deterministic and adversarial scenarios. Key performance indicators (KPIs), including updating time and destruction costs, are integrated into a comprehensive economic measure, emphasizing a strategic and cost-effective approach.

    The paper proposes an innovative online algorithm that uses the information available at each time period, and proves that it is 2-competitive. Comparative analyses include a stochastic multi-stage approach and four other algorithms representing former and current MSF policies, a greedy improvement of the MSF policy, and the perfect-information approach.

    Numerical results on various instances show that the online algorithm is competitive in terms of cost with the stochastic formulation, with differences primarily in computation time. This research contributes to a nuanced understanding of the kit-update problem, providing a practical and efficient online algorithmic solution within the realm of humanitarian logistics.


  • Le 26 juin 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Gael Guillot Inria Lille
    Strategic pricing in electricity markets with pollution constraints

    In this presentation, we describe a bilevel, multi-leader single-follower pricing problem in the electricity market. The leaders are the energy-producing companies, which must submit bids to a centralizing agent (the ISO). The ISO selects the bids and distributes the demand over the network. The generators operate several production technologies, with different costs and amounts of pollution produced. We will discuss the specific features of this problem as well as the algorithms that make it possible to find one or several Nash equilibria.


  • Le 18 septembre 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Lionel Rolland ENSTA-Paris
    Management of a battery on short term electricity markets

    Our work aims to quantify the benefit of storage flexibilities such as a battery on several short term electricity markets. We especially focus on two different markets, the intraday market (ID) and the activation market of the automatic Frequency Restoration Reserve (aFRR), also known as the secondary reserve. We propose algorithms to optimize the management of a small battery (<= 5 MWh) on these markets. In this talk, we first present the modeling of the problem, then we show some theoretical results and numerical simulations. We conclude by listing some limitations of the method.


  • Le 26 septembre 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Yanyu Zhou ENSTA Paris
    Information discovery in binary stochastic optimization

    In this talk we present an information discovery framework for optimization under uncertainty. In this framework, uncertain parameters are assumed to be “discoverable” by the decision-maker under a given discovery (query) budget. The decision-maker therefore decides which parameters to discover (query) in a first stage, and then solves an optimization problem with the discovered parameters at hand in a second stage. We model this problem as a two-stage stochastic program and develop decomposition algorithms for its solution. Notably, one of the algorithms we propose relies on a Dantzig-Wolfe reformulation of the recourse problem. This allows us to handle cases where the recourse problem involves binary decisions without relying on integer L-shaped cuts. In its implementation, this approach requires coupling the Benders’ and Dantzig-Wolfe reformulations, with the subproblems of the Benders’ algorithm solved using the column generation algorithm. We present some preliminary results on the kidney exchange problem under uncertain compatibility information.


  • Le 17 octobre 2024 à 11:15
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Jean Peyhardi Université de Montpellier
    (proba-stats) Polya urn models for multivariate species abundance data: properties and applications

    This talk focuses on models for multivariate count data, with emphasis on species abundance data. Two approaches dominate this framework: the Poisson log-normal (PLN) model and the Tree Dirichlet multinomial (TDM) model. The first uses a latent Gaussian vector to model dependencies between species, whereas the second models dependencies directly on the observed abundances. The TDM model assumes that the total abundance is fixed; it is therefore often used for microbiome data, since the sequencing depth (in RNA-seq) varies from one observation to another, making the total abundance not readily interpretable. We propose to generalize the TDM model in two ways: by relaxing the fixed-total-abundance assumption, and by using the Polya distribution instead of the Dirichlet multinomial. This family of models corresponds to Polya urn models with a random number of draws, and will be called Polya splitting distributions. In the first part, I will present the probabilistic properties of these models, with a focus on marginals and the probabilistic graphical model. It will then be shown that these models arise as stationary distributions of multivariate birth-death processes under simple parametric assumptions on the birth-death rates. These assumptions are related to the neutral theory of biodiversity, which assumes no biological interaction between species. Finally, the statistical aspects of Polya splitting models will be presented: the regression framework, inference, the use of a partition tree structure, and two applications to real data.
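    As a minimal illustration (not the speaker's code), the Dirichlet-multinomial at the heart of the TDM model can be sampled with the classical Polya urn with unit reinforcement; the function name and interface below are invented for this sketch.

```python
import random

def polya_urn_sample(alpha, n_draws, rng=None):
    """Draw species counts from a Dirichlet-multinomial via a Polya urn.

    alpha: initial urn weights (pseudo-counts), one per species.
    n_draws: total abundance, i.e. the number of balls drawn.
    Each draw picks species k with probability proportional to its
    current weight, then adds one ball of that species (reinforcement).
    """
    rng = rng or random.Random()
    weights = list(alpha)
    counts = [0] * len(weights)
    for _ in range(n_draws):
        k = rng.choices(range(len(weights)), weights=weights)[0]
        counts[k] += 1
        weights[k] += 1.0  # the Polya reinforcement step
    return counts
```

    Making `n_draws` itself random (e.g. Poisson-distributed) gives the "random number of draws" generalization mentioned in the abstract.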


  • Le 24 octobre 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Matteo Petris Ecole des Ponts ParisTech
    Robust approaches for the kidney exchange problem

    In the Kidney Exchange Problem (KEP), we consider a pool of altruistic donors and incompatible patient-donor pairs. Kidney exchanges can be modelled in a directed weighted graph as circuits starting and ending at an incompatible pair, or as paths starting at an altruistic donor. The weights on the arcs represent the medical benefit, which measures the quality of the associated transplantation. For medical reasons, circuits and paths are of limited length. The aim of the KEP is to determine a set of disjoint kidney exchanges of maximal medical benefit, or of maximal cardinality (all weights equal to one).

    In this work, we consider two types of uncertainty in the KEP, stemming from the estimation of the medical benefit (weights of the arcs) and from the failure of a transplantation (existence of the arcs). Both uncertainties are modelled via uncertainty sets with a constant budget. The robust approach entails finding the best KEP solution under the worst-case scenario within the uncertainty set. We model the robust counterpart by means of a max-min formulation defined on exponentially many variables associated with the circuits and paths. We propose different exact approaches to solve it, based either on the result of Bertsimas and Sim or on a reformulation to a single-level problem. In both cases, the core algorithm is a Branch-Price-and-Cut approach in which the exponentially many variables are generated dynamically. Computational experiments demonstrate the efficiency of our approach.
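    As an illustrative sketch (not part of the work presented), the bounded-length circuits underlying KEP formulations can be enumerated on a small digraph by depth-first search; the function name and the canonical-form convention (each cycle reported once, starting at its smallest node) are assumptions of this example, and it requires orderable node labels.

```python
def bounded_cycles(adj, max_len):
    """Enumerate simple directed cycles of length <= max_len.

    adj: dict node -> iterable of successor nodes.
    Each cycle is returned once, as a tuple starting at its smallest
    node; 'nxt > start' both canonicalizes and prunes duplicates.
    """
    cycles = set()

    def dfs(start, node, path):
        for nxt in adj.get(node, ()):
            if nxt == start:
                cycles.add(tuple(path))  # closed a circuit back to start
            elif nxt not in path and nxt > start and len(path) < max_len:
                dfs(start, nxt, path + [nxt])

    for v in adj:
        dfs(v, v, [v])
    return cycles
```

    In a KEP model, each such cycle (and each path from an altruistic donor, not enumerated here) would become one variable of the column formulation.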



  • Le 31 octobre 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Guillaume Joubert LS2N, IMT Atlantique
    Decision aid for tactical transportation problems

    Due to the complexity of the real-world planning processes addressed by major transportation companies, decisions are often made by considering successive problems at the strategic, tactical, and operational planning phases. However, these problems individually remain very challenging. This talk will present two examples of tactical transportation problems motivated by industrial applications: the Train Timetabling Problem (TTP) and the Service Network Scheduling Problem (SNSP). The TTP aims at scheduling a set of trains, months to years before actual operations, at every station of their path through a given railway network, while respecting safety headways. The SNSP determines the number of vehicles and their departure times on each arc of a middle-mile network while minimizing the sum of vehicle costs and late commodity delivery costs. For these two problems, the consideration of capacity and of uncertainty in travel times is discussed. We present models and solution approaches including MILP formulations, Tabu search, Constraint Programming techniques, and a Progressive Hedging metaheuristic.


  • Le 6 novembre 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Anthony Karahalios Carnegie Mellon University
    Column Elimination: An Iterative Approach to Solving Integer Programs

    Column elimination is an exact algorithm to solve discrete optimization problems via a 'column' formulation in which variables represent a combinatorial structure such as a machine schedule or truck route. Column elimination works with a relaxed set of columns, from which infeasible ones are iteratively eliminated. As the total number of columns can typically be exponentially large, we use relaxed decision diagrams to compactly represent and manipulate the set of columns. In this talk, we provide an introduction to the column elimination method and present recent algorithmic improvements resulting in competitive performance on large-scale vehicle routing problems. Specifically, our approach closes a large vehicle routing benchmark instance with 1,000 locations for the first time.


  • Le 21 novembre 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Arthur Leonard ENS Lyon
    Optimizing low-dimensional functions over the integers

    We consider the problem of optimizing an objective function g(W x) + c^T x over integer vectors x, where each coordinate of x is constrained to an interval. We assume that the matrix W has integer coefficients of absolute value bounded by Delta, and that it projects x onto a space of small dimension m << n. This problem generalizes a result of Hunkenschröder et al. in which g is separable convex and x lies in a 0-1 hypercube.

    We will present an algorithm of complexity n^m (m Delta)^O(m^2), under the assumption that the problem can be solved efficiently when n = m. This algorithm builds on the work of Eisenbrand and Weismantel on integer linear programming with few constraints.

    The algorithm has several theoretical applications, notably to mixed-integer linear programming with few constraints, and to the knapsack problem in which one must also buy the knapsack.
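    A brute-force baseline (emphatically not the n^m (m Delta)^O(m^2) algorithm of the talk) makes the problem statement concrete; the helper below is hypothetical, and only practical for tiny boxes, but it shows that the objective couples the coordinates of x only through the m-dimensional projection W x.

```python
from itertools import product

def maximize_low_dim(g, W, c, lower, upper):
    """Brute-force max of g(W x) + c^T x over an integer box.

    W: m x n integer matrix (as list of rows), c: length-n costs,
    lower/upper: per-coordinate integer bounds. Returns (value, x).
    """
    n = len(c)
    best_val, best_x = float("-inf"), None
    for x in product(*(range(lo, up + 1) for lo, up in zip(lower, upper))):
        y = [sum(W[i][j] * x[j] for j in range(n)) for i in range(len(W))]
        val = g(y) + sum(cj * xj for cj, xj in zip(c, x))
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x
```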


  • Le 28 novembre 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Liding Xu Zuse Institute Berlin
    Towards an exact framework for quantum state separable problem

    Separable states are multipartite quantum states that can be written as a convex combination of product states, where product states are multipartite quantum states that can be written as a tensor product of states in each space. The quantum state separability problem is NP-hard, but fundamental for quantum information theory. We propose two relaxation techniques for this problem. From the viewpoint of commutative optimization, we treat the states as matrices of multilinear complex polynomials; the resulting relaxation technique is similar to that for the complex bilinear polynomials arising in the Alternating Current Optimal Power Flow problem. From the viewpoint of non-commutative optimization, we treat the states as tensor products of bounded positive semidefinite variables, and propose generalized McCormick relaxations using linear matrix inequalities. These two relaxations will be the key components driving an exact branch-and-cut algorithm.
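    For intuition, the classical scalar McCormick envelope, which the matrix-inequality version above generalizes, replaces a bilinear term $w = xy$ with $x \in [x^L, x^U]$, $y \in [y^L, y^U]$ by four linear inequalities (a standard construction, stated here only for reference):

```latex
w \ge x^L y + x y^L - x^L y^L, \qquad
w \ge x^U y + x y^U - x^U y^U,
w \le x^U y + x y^L - x^U y^L, \qquad
w \le x^L y + x y^U - x^L y^U.
```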


  • Le 29 novembre 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Justine Cauvi ENS Lyon
    Landmark Hub Labeling: Improved Bounds and Faster Query Answering

    Hub Labeling (HL) is a state-of-the-art method for answering shortest-distance queries between node pairs in weighted graphs. It provides very fast query times, but also requires considerable additional space to store the label information. Recently, a generalization of HL, called Landmark Hub Labeling (LHL), has been proposed, which conceptually allows storing less label information without compromising the optimality of the query result. However, query answering with LHL was shown to be slower than with HL, both in theory and in practice. Furthermore, it was not clear whether there are graphs on which LHL yields a substantial space reduction over HL.

    In this talk, we describe a new way of storing the label information of an LHL such that query times are significantly reduced and asymptotically match those of HL. We establish novel bounds between different labeling variants and provide a comparative experimental study of approximation algorithms for HL and LHL. We demonstrate that label sizes in an LHL are consistently smaller than those of HL across diverse benchmark graphs, including road networks.
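    For readers unfamiliar with HL, a plain hub-labeling query can be sketched as follows (this shows ordinary HL only, not the landmark-based storage of LHL; the dictionary-based label format is an assumption of the example). With a correct cover labeling, the distance between u and v is the minimum of d(u, h) + d(h, v) over hubs h common to both labels.

```python
def hl_query(label_u, label_v):
    """Shortest-distance query from two hub labels.

    label_u, label_v: dict hub -> distance from the node to that hub.
    Returns the minimum over common hubs, or infinity if none exist.
    """
    best = float("inf")
    for hub, du in label_u.items():
        dv = label_v.get(hub)
        if dv is not None:
            best = min(best, du + dv)
    return best
```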


  • Le 12 décembre 2024 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Thibault Prunet Ecole des Ponts ParisTech
    Decision-focused learning: theory, applications and recent algorithmic developments

    Real-world industrial applications frequently confront the task of decision-making under uncertainty. The classical paradigm for these complex problems is to use machine learning (ML) and combinatorial optimization (CO) in a sequential and independent manner: ML learns uncertain quantities from historical data, which are then used as input to a specific CO problem. Decision-focused learning is a recent paradigm that directly trains the ML model to make predictions that lead to good decisions. This is achieved by integrating a CO layer into an ML pipeline, which raises several theoretical and practical challenges.

    In this talk, we aim to provide a comprehensive introduction to decision-focused learning. We will first introduce the main challenges raised by hybrid ML/CO pipelines, the theory of Fenchel-Young losses for surrogate differentiation, and the main applications of decision-focused learning. We will then present our ongoing work on developing efficient algorithms, based on Bregman geometry, for minimizing the empirical regret in complex stochastic optimization problems.


  • Le 19 décembre 2024 à 14:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Paula Metzker HEC Montreal
    Distributionally robust optimization for the multi-item lot-sizing problem under production yield uncertainty

    This research examines a distributionally robust optimization approach for the lot-sizing problem with production delays and yield uncertainty, under event-wise ambiguity sets. Moment-based, Wasserstein, and K-means-clustering ambiguity sets are used to represent the distribution of the yields. Static and static-dynamic decision strategies are considered for computing a solution. In this presentation, the performance of the different ambiguity sets will be compared, with the aim of determining a production plan that is both satisfactory and robust to changes in the environment. A numerical study will show that the model remains tractable for all the ambiguity sets considered, and that the resulting production plans remain effective across different decision-making strategies and contexts.


  • Le 30 janvier 2025 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Benoit Rottembourg Centre Inria de Paris
    Some combinatorial and stochastic optimization problems that arise when auditing biases in online algorithms

    Marketplaces account for more than 62% of global e-commerce sales as of 2023, with their turnover having increased 4-fold in less than 10 years. At the same time, e-commerce continues to grow: in 2023, it was worth 159.9 billion euros in France, accounting for 10% of retail sales. The recommendation algorithms of these marketplaces - the famous buyboxes - guide our choices. But what do we know about their fairness? How do we know whether they are deceiving us, whether they are biased, or whether they are simply maximizing the platform's turnover?


    We sought to show how to audit the fairness of these algorithms, seen as black boxes, by studying the case of Amazon, on several thousand products in France. We will show that the search for a prejudicial bias (for the consumer or for the platform's competitors) raises the question of finding a context of use in which the bias is statistically significant. 

    We will illustrate the combinatorial optimization problem that arises when the bias is ‘regional’ (neither global nor local to a decision) and when the sensitive variable (describing the discriminated population) is continuous and non-binary. 

    Finally, and more generally, we will describe the issues that arise when the query budget for the black-box algorithm is limited and when several auditors cooperate to detect bias more effectively.


  • Le 13 février 2025 à 11:15
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Julien Gibaud Université de Bordeaux
    (proba-stats) Supervised component-based generalized linear regression for the joint modeling of responses

    In this presentation, a response matrix (here, species abundances) is assumed to depend on explanatory variables (here, environmental variables) that are assumed to be numerous and redundant, thus demanding dimension reduction. Supervised Component-based Generalized Linear Regression (SCGLR), a Partial Least Squares-type method, is designed to extract from the explanatory variables several components jointly supervised by the set of responses. However, this methodology still has some limitations that we aim to overcome in this work. The first limitation comes from the assumption that all the responses are predicted by the same explanatory space. The second is that previous works involving SCGLR assume the responses to be independent conditionally on the explanatory variables; this is unlikely in practice, especially in situations like those in ecology, where a non-negligible part of the explanatory variables cannot be measured. To overcome the first limitation, we assume that the responses are partitioned into several unknown groups, and that the responses in each group are predictable from an appropriate number of specific orthogonal supervised components of the explanatory variables. The second contribution relaxes the conditional independence assumption: a set of a few latent factors models the residual covariance matrix of the responses conditional on the components. The approaches presented in this work are tested on simulation schemes, and then applied to ecological datasets.


  • Le 20 février 2025 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Mariam Sangare Inria Bordeaux - EDGE
    Cooperative game theory and cost allocation in energy communities

    An energy community (EC) is a legal entity involving prosumers and consumers who produce, consume, and exchange energy. The members of these communities can cooperate to maximize the community’s social welfare. In practice, this naturally raises the question of cost sharing within the community, as the members may contribute differently to the social welfare. In this presentation, we empirically highlight the benefits of cooperation for the community and for its individual members. Then, we present some cost-sharing mechanisms that guarantee fairness and the stability of the grand coalition composed of all prosumers and consumers. Finally, we present some results on instances built with real-world data from our partner Sween’s demonstrator, Smart Lou Quila, in southern France.
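    One classical cost-sharing mechanism from cooperative game theory is the Shapley value, sketched below for intuition (the mechanisms of the talk, which additionally guarantee core stability of the grand coalition, are not necessarily this one; the interface is hypothetical). It averages each member's marginal cost over all arrival orders, which is exponential in the number of players and thus only feasible for tiny communities.

```python
from itertools import permutations

def shapley_values(players, cost):
    """Exact Shapley cost shares by averaging marginal costs.

    cost: function mapping a frozenset coalition to its total cost,
    with cost(frozenset()) == 0. Exponential-time: demo-sized only.
    """
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            shares[p] += cost(with_p) - cost(coalition)  # marginal cost
            coalition = with_p
    return {p: s / len(orders) for p, s in shares.items()}
```

    For an additive cost function, each member simply pays its own cost; the interesting cases are the sub-additive ones, where cooperation creates savings to be shared.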


  • Le 20 février 2025 à 11:30
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Luca Brunod EDF R&D
    Anchor-robust project scheduling with non-availability periods

    When planning large projects, it is convenient to rely on a baseline schedule, which gives a preview of the logistical requirements. However, project data is often uncertain, and jobs may be interrupted by unforeseen periods of unavailability. Disruptions occurring between the planning and the execution of the project may render the baseline schedule infeasible. Part of the project must then be rescheduled, possibly at great cost.

    Anchor robustness is a two-stage robust approach that aims at computing a baseline schedule in which a subset of so-called anchored jobs have their starting times guaranteed against any disruption in a given uncertainty set. Anchor-robust scheduling, previously studied for processing-time uncertainty, is extended here to the case of uncertain unavailability periods.


  • Le 13 mars 2025 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    François Hamonic Institut méditerranéen de biodiversité et d'écologie marine et continentale (IMBE)
    Optimizing the ecological connectivity of habitat networks: ILP and shortest-path preprocessing

    An ecological landscape can be modeled as a directed graph G=(V,A) whose vertices represent the habitat patches of the landscape and whose arcs represent the connections between these patches. Each vertex carries a weight indicating the ecological quality of the patch it represents, and each arc is associated with a length representing the difficulty for an organism to make the corresponding movement. The Probability of Connectivity of the landscape is computed from the shortest-path distances in this weighted graph, and is often used by ecologists to assess landscape connectivity and to identify the areas to prioritize for conservation or restoration.

    We study the problem of maximizing the Probability of Connectivity of a landscape under a budget constraint, that is, finding the best combination of management options from a given set, each option being modeled as a modification of the graph weights.

    We give an ILP formulation of this problem and propose a shortest-path preprocessing technique that significantly reduces the size of the linear programs to be solved. To carry out this preprocessing efficiently, we give an O(|A| + |V| log |V|) algorithm for the following problem: given a set of scenarios, each characterized by a choice of the possible arc lengths, and an arc (u,v), compute the set of vertices t such that (u,v) lies on a shortest path from u to t in every scenario.
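    For a single scenario, the preprocessing test admits a simple characterization: an arc (u,v) of length w lies on some shortest u-to-t path if and only if dist(u,t) = w + dist(v,t), which two Dijkstra runs can check. The sketch below (hypothetical interface, and not the talk's O(|A| + |V| log |V|) multi-scenario algorithm) illustrates this single-scenario version.

```python
import heapq

def dijkstra(adj, source):
    """adj: dict node -> list of (neighbor, length). Returns distances."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist.get(x, float("inf")):
            continue  # stale heap entry
        for y, w in adj.get(x, ()):
            nd = d + w
            if nd < dist.get(y, float("inf")):
                dist[y] = nd
                heapq.heappush(heap, (nd, y))
    return dist

def covered_targets(adj, u, v, w_uv):
    """Nodes t such that arc (u, v) lies on some shortest u -> t path."""
    dist_u, dist_v = dijkstra(adj, u), dijkstra(adj, v)
    return {t for t, d in dist_u.items()
            if abs(d - (w_uv + dist_v.get(t, float("inf")))) < 1e-9}
```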


  • Le 18 mars 2025 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Ksenia Bestuzheva Zuse-Institut Berlin
    New perspectives on invexity and its algorithmic applications
    One of the key properties of convex problems is that every stationary point is a global optimum, and nonlinear programming algorithms that converge to local optima are thus guaranteed to find the global optimum. However, some nonconvex problems possess the same property. This observation has motivated research into generalizations of convexity. This talk proposes a new generalization which we refer to as optima-invexity: the property that only one connected set of optimal solutions exists. We state conditions for optima-invexity of unconstrained problems and discuss structures that are promising for practical use, and outline algorithmic applications of these structures.



  • Le 27 mars 2025 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Pierre Montalbano Laboratoire d'Informatique Fondamentale et Appliquée de Tours (LIFAT)
    Logic-Based Benders Decomposition for Local and Short Food Supply Chain Logistics

    In Europe, short food supply chains are recognized as a cornerstone of the transition towards a more sustainable food system. New intermediaries are emerging to facilitate relationships between producers and institutions such as catering services. They often operate within a restricted area, with precise knowledge of the producers' stocks and delivery capacities and of the catering services' product needs. Their role is to allocate orders to producers and to manage their distribution, sometimes through a consolidation platform. The objective is to minimize logistics costs (total distance traveled) while maximizing customer preference for products that are as local as possible. Making good decisions in this context is a complex task because of several constraints, such as delivery capacities, deadlines, and the perishability of fresh products. Moreover, the assignment of orders to producers inherently influences the delivery plan.

    In this study, we propose a MILP model for this problem and develop an exact method based on logic-based Benders decomposition. To improve efficiency, we incorporate strategies to accelerate convergence. Using a real case study (Manger Bio Centre-Val de Loire), we evaluate our method on semi-random instances of reasonable size. The results show that our approach largely outperforms solving the MILP directly with CPLEX.


  • Le 3 avril 2025 à 10:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Roberto Bargetto Politecnico di Torino
    Iterated Inside Out and Its Specialized Variant for DOTmark Instances: A New Exact Algorithm for the Transportation Problem

    The transportation problem has recently gained renewed interest due to its applications in machine learning and artificial intelligence, particularly in measuring distances between images.

    To address the associated computational challenges, we propose Iterated Inside Out, a novel algorithm for the exact resolution of the well-known transportation problem. The core of our algorithm requires an initial basic feasible solution and consists of two phases that are iteratively repeated until an optimal basic feasible solution is obtained. In the "inside" phase, the algorithm progressively improves the current solution by increasing the values of non-basic variables with negative reduced costs, typically yielding a feasible solution that is non-basic and interior to the polytope of the constraint set. In the "outside" phase, the algorithm improves the current solution by iteratively setting variables to zero until a new, improved basic feasible solution is reached.

    Building on this core algorithm, we develop a specialized variant tailored to image processing. Extensive computational experiments demonstrate that our approach significantly outperforms commercial solvers such as CPLEX and Gurobi, as well as other exact algorithms in the literature, on both randomly generated instances and the DOTmark benchmark, a standard dataset for image processing and large-scale transportation problems.
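    As background, an initial basic feasible solution for a balanced transportation problem is classically built with the northwest-corner rule, sketched below with a hypothetical interface (the abstract does not say how the authors warm-start Iterated Inside Out; this is only one standard option).

```python
def northwest_corner(supply, demand):
    """Initial basic feasible solution of a balanced transportation problem.

    Greedily fills the allocation matrix from the top-left cell,
    exhausting one row (supply) or column (demand) at each step.
    Assumes sum(supply) == sum(demand).
    """
    s, d = list(supply), list(demand)
    m, n = len(s), len(d)
    x = [[0.0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        q = min(s[i], d[j])       # ship as much as this cell allows
        x[i][j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:
            i += 1                # row exhausted, move down
        else:
            j += 1                # column exhausted, move right
    return x
```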


  • Le 17 avril 2025 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Patxi Flambard Centre Inria de l'Universite de Bordeaux/IMB
    A semi-infinite constraint generation algorithm for two-stage robust optimization problems

    In this talk we consider two-stage linear adjustable robust optimization problems with continuous and fixed recourse. These problems have been the subject of exact solution approaches, notably constraint generation (CG) and constraint-and-column generation (CCG). Both approaches rely on an exponential-size reformulation of the problem that uses a large number of constraints, or of constraints and variables. The decomposition algorithms then solve and reinforce a relaxation of this reformulation across iterations, which requires solving bilinear separation problems.

    Here, we present an alternative approach relying on a novel reformulation of the problem with an exponential number of semi-infinite constraints. We present a nested decomposition algorithm that deals with the exponential and semi-infinite natures of our formulation separately. We argue that our algorithm leads to a reduced number of bilinear separation problems solved, while providing a high-quality relaxation. We perform a detailed numerical study that showcases the superior performance of our proposed approach compared to the state of the art and evaluates the contribution of the different algorithmic components.


  • Le 15 mai 2025 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Fulin Yan Institut de Mathématiques de Bordeaux-CATIE
    Machine Learning Alongside Beam Search for Resource Constrained Shortest Path Problems on State Transition Graphs

    Many combinatorial optimization (CO) problems can be formulated as resource-constrained shortest path problems (RCSPPs) on directed acyclic multigraphs, which represent state-transition graphs defined under the dynamic programming (DP) paradigm. The numbers of vertices, arcs, and resource constraints depend on the size of the original problem instance, and the resulting problem is NP-hard. Exact methods require high-quality primal bounds to converge efficiently.

    In this work, we focus on designing a generic constructive heuristic algorithm that can be applied to any problem once it is formulated as an RCSPP on a directed acyclic multigraph. Recent advances have demonstrated that combining machine learning (ML) with tree-based search strategies can enhance the solution of CO problems. Building on this idea, we propose an ML-enhanced beam search algorithm for RCSPPs. Our ML model leverages graph-based information to score candidate paths for expansion.

    The proposed method offers several advantages: (1) it is generic, (2) the ML model is independent of problem size, and (3) it requires no additional problem-specific information beyond the graph structure. We evaluate our approach on the single machine total tardiness problem and the temporal knapsack problem (TKP), yielding improved results compared to standard beam search without ML integration.
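    A generic beam search over a state-transition graph can be sketched as follows (hypothetical interface; in the talk, the `score` function is where the learned ML model would plug in, using graph-based features, and resource feasibility checks are omitted here). The graph is assumed acyclic, as in the DP formulations above.

```python
def beam_search(succ, score, start, is_goal, beam_width):
    """Beam search over a DAG of states.

    succ: state -> iterable of successor states.
    score: path (list of states) -> number, higher is better.
    Keeps only the beam_width best partial paths per level and
    returns the best goal-reaching path found, or None.
    """
    beam = [[start]]
    best = None
    while beam:
        candidates = []
        for path in beam:
            for nxt in succ(path[-1]):
                new_path = path + [nxt]
                if is_goal(nxt):
                    if best is None or score(new_path) > score(best):
                        best = new_path
                else:
                    candidates.append(new_path)
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]  # prune to the beam width
    return best
```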


  • Le 22 mai 2025 à 11:15
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Lionel Truquet ENSAI
    (proba-stats) Markov chains in random environments and applications in econometrics and machine learning

    For the analysis of time series, or for studying the convergence of certain stochastic algorithms such as those based on Langevin dynamics, the dynamics of some discrete-time Markov processes are naturally perturbed by a stochastic disorder that may be called noise, an exogenous factor, or a random environment, depending on the terminology of the field. The goal of this talk is to show that the stability of such Markov chains in a random environment on unbounded spaces can be studied using adapted versions of more classical stability conditions (drift conditions, small-set conditions, or contraction of random maps). Sufficient conditions for the existence of stationary and ergodic measures, as well as control of the alpha-mixing coefficients of these processes, can be obtained. One can then derive a limit theory for autoregressive processes with exogenous covariates, as well as qualitative estimates of the convergence rate of certain stochastic algorithms.


  • Le 12 juin 2025 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Igor Malheiros Université de Montpellier - Atoptima
    The robust time-dependent vehicle routing problem with time windows and budget uncertainty

    The vehicle routing problem with time windows (VRPTW) is a classical problem with many applications. To improve schedule accuracy, models may account for time dependency in travel times. The real traffic data used as input to the optimizers is often obtained from location providers that predict travel times for specified timeframes. These predictors estimate travel times based on historical data, where uncertainties from accidents, weather conditions, and roadworks may add unpredictable delays. The providers enhance their forecasts by monitoring live traffic and updating travel times throughout the day.

    Since routes are typically designed in advance, updates are hard to integrate once execution starts. In this context, this work introduces the robust time-dependent VRPTW (RTDVRPTW), which uses robust optimization to handle uncertainty in travel times.

    In this talk, we discuss the computational properties of RTDVRPTW under various assumptions, present exact and heuristic solution methods, and share results from real-data experiments to offer practical insights.


  • Le 10 juillet 2025 à 11:00
  • Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique
    Yasmine Beck ESSEC Business School
    A Brief Introduction to Bilevel Optimization and Uncertainty: Key Concepts and Some Recent Algorithmic Advances

    Bilevel optimization is a rather young but very active sub-field of mathematical optimization. The main reason is that bilevel optimization problems can serve as a powerful tool for modeling hierarchical decision-making processes, which arise in various real-world applications such as in critical infrastructure defense, transportation, or energy. However, the nested structure of bilevel problems makes them intrinsically hard to solve—even if all parameters of the problem are exactly known. Further challenges arise if problems under uncertainty are considered.

    In this talk, we begin with a brief introduction to the key concepts of bilevel optimization, covering the structure, characteristics, and commonly used reformulations of these problems. We then explore how techniques from robust optimization can be used to tackle bilevel problems under uncertainty. In particular, we highlight that the sources of uncertainty in bilevel optimization are much richer compared to classic, i.e., single-level, problems since not only the problem data but also the (observation of the) decisions of the two players can be subject to uncertainty.

    Finally, we discuss recent algorithmic advances for solving mixed-integer linear bilevel problems with a binary lower level and a Gamma-robust treatment of lower-level objective uncertainty. To solve these Gamma-robust bilevel problems, we present an exact and problem-tailored branch-and-cut approach as well as a heuristic framework. The latter relies on solving a linear number of deterministic bilevel problems so that no problem-specific tailoring is required.