Organizers: Ayse Nur Arslan and Frédéric Barraquand.
In this talk, we will introduce a new exact algorithm to solve two-stage stochastic linear programs. Based on the multicut Benders reformulation of such problems, with one subproblem per scenario, the method relies on a partition of the subproblems into batches. The key idea is to solve only a small proportion of the subproblems at most iterations, by detecting as early as possible that a first-stage candidate solution cannot be proven optimal. We also propose a general framework to stabilize our algorithm, and show its finite convergence and exactness. We report an extensive computational study on large-scale instances from the stochastic optimization literature that shows the efficiency of the proposed algorithm compared to nine alternative algorithms from the literature. We also obtain significant additional savings in computational time using the primal stabilization schemes.
Adjustable robust optimization problems, a subclass of multi-stage optimization problems under uncertainty, constitute a class of problems that are very difficult to solve in practice. Although an exact solution may be possible in certain special cases, no exact solution algorithms are known for the general case. Instead, approximate solution methods have been developed, often by restricting the functional form of recourse actions; these restrictions are generally referred to as "decision rules". In this talk, we will present a review of existing decision rule approximations, including affine and extended affine decision rules, uncertainty set partitioning schemes, and finite adaptability. We will discuss the reformulations and solution algorithms that result from these approximations. We will point out existing challenges in the practical use of these decision rules and identify current and future research directions. When possible, we will emphasize connections to the multi-stage stochastic programming literature.
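As a generic illustration of the simplest such restriction (our notation, not specific to the talk), an affine decision rule replaces a fully adjustable recourse decision y(\xi) by an affine function of the uncertain parameters:

\[
y(\xi) \;=\; y^{0} + Y \xi,
\]

where the coefficients y^{0} and Y become first-stage decisions, so that the adjustable problem is approximated by a static robust problem over (x, y^{0}, Y).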
Two-stage stochastic programs (TSSPs) are a classic model in which a decision must be made before the realization of a random event, with recourse actions performed after observing the random values. Many classic optimization problems, such as network flow or facility location problems, become TSSPs if we consider, for example, a random demand.
Benders decomposition is one of the most widely applied methods to solve TSSPs with a large number of scenarios. The main idea behind Benders decomposition is to solve a large problem by replacing the values of the second-stage subproblems with individual variables, and progressively forcing those variables to reach the optimal values of the subproblems by dynamically inserting additional valid constraints, known as Benders cuts. Most traditional implementations add either one cut per scenario (multi-cut) or a single cut that aggregates all scenarios (single-cut).
In this paper we present a novel Benders adaptive-cuts method, in which the Benders cuts are aggregated according to a partition of the scenarios that is dynamically refined using the LP-dual information of the subproblems. This scenario aggregation/disaggregation is based on the Generalized Adaptive Partitioning Method (GAPM). We formalize this hybridization of Benders decomposition and the GAPM by providing sufficient conditions under which an optimal solution of the deterministic equivalent can be obtained in a finite number of iterations. Our new method can be interpreted as a compromise between the single-cut and multi-cut Benders methods, drawing on the advantages of both: it renders the initial iterations faster (as with single-cut Benders) while ensuring faster overall convergence (as with multi-cut Benders).
Computational experiments on three TSSPs validate these statements, showing that the new method outperforms other implementations of the Benders method, as well as other standard methods for solving TSSPs, in particular when the number of scenarios is very large.
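To fix ideas (our notation, assuming fixed recourse and scenario-dependent right-hand side and technology matrix), the standard multi-cut method adds, for each scenario s with probability p_s and optimal dual solution \lambda_s, a cut \theta_s \ge \lambda_s^{\top}(h_s - T_s x). Aggregating the scenarios of a partition cell P into a single epigraph variable \theta_P yields a cut of the form

\[
\theta_P \;\ge\; \sum_{s \in P} p_s\, \lambda_s^{\top}\big(h_s - T_s x\big),
\]

and an adaptive method such as the one described above refines the cells P whenever their dual information indicates that the aggregation is too coarse.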
This paper explores strategic optimization in updating essential medical kits crucial for humanitarian emergencies. Recognizing the perishable nature of medical components, the study emphasizes the need for regular updates involving complex recovery, substitution and disposal processes with the associated costs. The goal is to minimize costs over an unpredictable time horizon. The introduction of the kit-update problem considers both deterministic and adversarial scenarios. Key performance indicators (KPIs), including updating time and destruction costs, are integrated into a comprehensive economic measure, emphasizing a strategic and cost-effective approach.
The paper proposes an innovative online algorithm utilizing the information available at each time period, and demonstrates its 2-competitiveness. Comparative analyses include a stochastic multi-stage approach and four other algorithms representing former and current MSF policies, a greedy improvement of the MSF policy, and the perfect-information approach.
Results on various instances show that the online algorithm is competitive in terms of cost with the stochastic formulation, with differences primarily in computation time. This research contributes to a nuanced understanding of the kit-update problem, providing a practical and efficient online algorithmic solution within the realm of humanitarian logistics.
In this presentation, we describe a bilevel, multi-leader-single-follower pricing problem in the electricity market. The leaders correspond to the energy-producing companies, which must submit bids to a centralizing agent (the ISO). The ISO selects the bids and distributes the demand over the network. The generators operate several production technologies, with different costs and amounts of pollution produced. We will describe the particular features of this problem as well as the different algorithms that allow one or several Nash equilibria to be found.
Our work aims to quantify the benefit of storage flexibilities such as a battery on several short term electricity markets. We especially focus on two different markets, the intraday market (ID) and the activation market of the automatic Frequency Restoration Reserve (aFRR), also known as the secondary reserve. We propose algorithms to optimize the management of a small battery (<= 5 MWh) on these markets. In this talk, we first present the modeling of the problem, then we show some theoretical results and numerical simulations. We conclude by listing some limitations of the method.
In this talk we present an information discovery framework for optimization under uncertainty. In this framework, uncertain parameters are assumed to be "discoverable" by the decision-maker under a given discovery (query) budget. The decision-maker therefore decides which parameters to discover (query) in a first stage, and then solves an optimization problem with the discovered uncertain parameters at hand in a second stage. We model this problem as a two-stage stochastic program and develop decomposition algorithms for its solution. Notably, one of the algorithms we propose relies on a Dantzig-Wolfe reformulation of the recourse problem. This allows us to treat cases where the recourse problem involves binary decisions without relying on integer L-shaped cuts. In its implementation, this approach requires coupling the Benders and Dantzig-Wolfe reformulations, with the subproblems of the Benders algorithm being solved using column generation. We present preliminary results on the kidney exchange problem under uncertain compatibility information.
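Schematically (our notation; the precise model in the talk may differ), the first stage can be written as

\[
\min_{w \in \{0,1\}^{n},\; \mathbf{1}^{\top} w \le B} \;\; \mathbb{E}_{\xi}\big[\, Q(w, \xi) \,\big],
\]

where w_i = 1 means that parameter \xi_i is queried, B is the discovery budget, and Q(w, \xi) is the optimal value of the second-stage problem solved with the discovered components of \xi revealed.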
This talk focuses on models for multivariate count data, with an emphasis on species abundance data. Two approaches emerge in this framework: the Poisson log-normal (PLN) and the Tree Dirichlet multinomial (TDM) models. The first uses a latent Gaussian vector to model dependencies between species, whereas the second models dependencies directly on the observed abundances. The TDM model assumes that the total abundance is fixed, and is therefore often used for microbiome data, since the sequencing depth (in RNA-seq) varies from one observation to another, making the total abundance not directly interpretable. We propose to generalize the TDM model in two ways: by relaxing the fixed total abundance and by using the Polya distribution instead of the Dirichlet multinomial. This family of models corresponds to Polya urn models with a random number of draws and will be called Polya splitting distributions. In the first part, I will present the probabilistic properties of such models, with a focus on marginals and the probabilistic graphical model. It will then be shown that these models emerge as stationary distributions of multivariate birth-death processes under simple parametric assumptions on the birth-death rates. These assumptions are related to the neutral theory of biodiversity, which assumes no biological interaction between species. Finally, the statistical aspects of Polya splitting models will be presented: the regression framework, the inference, the consideration of a partition tree structure, and two applications on real data.
In the Kidney Exchange Problem (KEP), we consider a pool of altruistic donors and incompatible patient-donor pairs.
Kidney exchanges can be modelled in a directed weighted graph as circuits starting and ending in an incompatible pair or as paths starting at an altruistic donor.
The weights on the arcs represent the medical benefit which measures the quality of the associated transplantation.
For medical reasons, circuits and paths are of limited length; each is associated with the medical benefit of performing its transplants.
The aim of the KEP is to determine a set of disjoint kidney exchanges of maximal medical benefit or maximal cardinality (all weights equal to one).
In this work, we consider two types of uncertainty in the KEP which stem from the estimation of the medical benefit (weights of the arcs) and from the failure of a transplantation (existence of the arcs).
Both uncertainties are modelled via uncertainty sets with a constant budget.
The robust approach entails finding the best KEP solution in the worst-case scenario within the uncertainty set.
We model the robust counterpart by means of a max-min formulation defined over exponentially many variables associated with the circuits and paths (a schematic version is given after this abstract).
We propose different exact approaches to solve it, based either on the result of Bertsimas and Sim or on a reformulation as a single-level problem.
In both cases, the core algorithm is a Branch-Price-and-Cut approach in which the exponentially many variables are generated dynamically.
The computational experiments demonstrate the efficiency of our approach.
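Schematically (our notation), with \mathcal{C} the set of feasible circuits and paths, w_c(\xi) the medical benefit of exchange c under scenario \xi, and \Xi the budgeted uncertainty set, the max-min formulation mentioned above reads

\[
\max_{x \in X} \; \min_{\xi \in \Xi} \; \sum_{c \in \mathcal{C}} w_c(\xi)\, x_c ,
\]

where X \subseteq \{0,1\}^{|\mathcal{C}|} encodes the selection of vertex-disjoint circuits and paths of limited length.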
Due to the complexity of real-world planning processes addressed by major transportation companies, decisions are often made by considering successive problems at the strategic, tactical, and operational planning phases. However, these problems still prove to be individually very challenging. This talk will present two examples of tactical transportation problems motivated by industrial applications: the Train Timetabling Problem (TTP) and the Service Network Scheduling Problem (SNSP). The TTP aims at scheduling a set of trains, months to years before actual operations, at every station of their path through a given railway network while respecting safety headways. The SNSP determines the number of vehicles and their departure times on each arc of a middle-mile network while minimizing the sum of vehicle costs and late commodity delivery costs. For these two problems, the consideration of capacity and of uncertainty in travel times is discussed. We present models and solution approaches including MILP formulations, Tabu search, Constraint Programming techniques, and a Progressive Hedging metaheuristic.
Column elimination is an exact algorithm to solve discrete optimization problems via a 'column' formulation in which variables represent a combinatorial structure such as a machine schedule or truck route. Column elimination works with a relaxed set of columns, from which infeasible ones are iteratively eliminated. As the total number of columns can typically be exponentially large, we use relaxed decision diagrams to compactly represent and manipulate the set of columns. In this talk, we provide an introduction to the column elimination method and present recent algorithmic improvements resulting in competitive performance on large-scale vehicle routing problems. Specifically, our approach closes a large vehicle routing benchmark instance with 1,000 locations for the first time.
We consider the problem of optimizing an objective function g(W x) + c^T x over integer vectors x, where each coordinate of x is constrained to lie in an interval. We assume that the matrix W has integer coefficients of absolute value bounded by Delta, and that it projects x onto a space of small dimension m << n. This problem generalizes a result of Hunkenschröder et al. in which g is separable convex and x lies in the 0-1 hypercube.
We will present an algorithm with complexity n^m (m Delta)^O(m^2), under the assumption that the problem can be solved efficiently when n = m. This algorithm builds on the work of Eisenbrand and Weismantel on integer linear programming with few constraints.
The algorithm can, in theory, be applied to several problems, notably mixed-integer linear programming with few constraints, or the knapsack problem in which the knapsack itself must be bought.
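In symbols, the problem considered is

\[
\min \; g(Wx) + c^{\top}x
\quad \text{s.t.} \quad l \le x \le u, \;\; x \in \mathbb{Z}^{n},
\]

with W \in \mathbb{Z}^{m \times n}, \|W\|_{\infty} \le \Delta and m \ll n, solvable in time n^{m}(m\Delta)^{O(m^{2})} given an efficient oracle for the case n = m.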
Separable states are multipartite quantum states that can be written as a convex combination of product states. Product states are multipartite quantum states that can be written as a tensor product of states in each space. The quantum state separability problem is NP-hard but fundamental to quantum information theory. We propose two relaxation techniques for this problem. From the viewpoint of commutative optimization, we treat the states as matrices of multilinear complex polynomials. Our relaxation technique turns out to be similar to that used for the complex bilinear polynomials arising in the Alternating Current Optimal Power Flow problem. From the viewpoint of non-commutative optimization, we treat the states as tensor products of bounded positive semidefinite variables. We propose generalized McCormick relaxations using linear matrix inequalities. These two relaxations will be the key components driving an exact branch-and-cut algorithm.
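For context (the classical scalar case that such relaxations generalize; notation ours), the McCormick envelope of a bilinear term w = xy with x \in [x^L, x^U] and y \in [y^L, y^U] consists of the four linear inequalities

\[
\begin{aligned}
w &\ge x^{L} y + y^{L} x - x^{L} y^{L}, \qquad & w &\ge x^{U} y + y^{U} x - x^{U} y^{U},\\
w &\le x^{U} y + y^{L} x - x^{U} y^{L}, \qquad & w &\le x^{L} y + y^{U} x - x^{L} y^{U}.
\end{aligned}
\]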
Hub Labeling (HL) is a state-of-the-art method for answering shortest-distance queries between node pairs in weighted graphs. It provides very fast query times but also requires considerable additional space to store the label information. Recently, a generalization of HL, called Landmark Hub Labeling (LHL), has been proposed that conceptually allows storing less label information without compromising the optimality of the query result. However, query answering with LHL was shown to be slower than with HL, both in theory and in practice. Furthermore, it was not clear whether there are graphs for which using LHL instead of HL yields a substantial space reduction.
In this talk, we describe a new way of storing the label information of an LHL such that query times are significantly reduced and asymptotically match those of HL. We establish novel bounds between different labeling variants and provide a comparative experimental study of approximation algorithms for HL and LHL. We demonstrate that label sizes in an LHL are consistently smaller than those of HL across diverse benchmark graphs, including road networks.
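As a point of reference (standard two-hop HL, not the new LHL storage scheme from the talk), a distance query simply scans the two labels for their best common hub; a minimal sketch in Python:

    def hl_query(label_u, label_v):
        # Standard hub-labeling query: label_x maps hub -> dist(x, hub).
        # Returns min over common hubs h of dist(u, h) + dist(h, v),
        # or None if the two labels share no hub.
        best = None
        for h, du in label_u.items():
            dv = label_v.get(h)
            if dv is not None and (best is None or du + dv < best):
                best = du + dv
        return best

    # Toy usage: hub "h2" is shared, giving distance 5 + 1 = 6.
    print(hl_query({"h1": 2, "h2": 5}, {"h2": 1, "h3": 4}))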
Real-world industrial applications frequently confront the task of decision-making under uncertainty. The classical paradigm for these complex problems is to use machine learning (ML) and combinatorial optimization (CO) in a sequential and independent manner: ML learns uncertain quantities based on historical data, which are then used as input for a specific CO problem. Decision-focused learning is a recent paradigm that directly trains the ML model to make predictions that lead to good decisions. This is achieved by integrating a CO layer in an ML pipeline, which raises several theoretical and practical challenges.
In this talk, we aim to provide a comprehensive introduction to decision-focused learning. We will first introduce the main challenges raised by hybrid ML/CO pipelines, the theory of Fenchel-Young losses for surrogate differentiation, and the main applications of decision-focused learning. As a second objective, we will present our ongoing work aimed at developing efficient algorithms based on Bregman geometry to address the minimization of the empirical regret in complex stochastic optimization problems.
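The following minimal, self-contained sketch (our own illustration, not the speakers' code) shows the idea of a CO layer trained with a perturbation-based Fenchel-Young loss; the CO layer here is a simple top-k selection and all names and parameters are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def top_k(theta, k):
        # CO layer: argmax_{y in {0,1}^n, sum(y) = k} of theta . y
        y = np.zeros_like(theta)
        y[np.argsort(theta)[-k:]] = 1.0
        return y

    def fy_grad_theta(theta, y_true, k, n_samples=50, eps=0.5):
        # Monte Carlo gradient of a perturbed Fenchel-Young loss:
        # E_Z[ y*(theta + eps Z) ] - y_true, with Z ~ N(0, I).
        avg = np.mean([top_k(theta + eps * rng.standard_normal(theta.shape), k)
                       for _ in range(n_samples)], axis=0)
        return avg - y_true

    # Toy data: each instance has n items with f features; the target decision
    # is the top-k set under hidden "true" utilities.
    n, f, k = 10, 5, 3
    w_true = rng.standard_normal(f)
    X = rng.standard_normal((200, n, f))
    Y = np.array([top_k(x @ w_true, k) for x in X])

    w = np.zeros(f)                      # ML model: theta_i = w . x_i
    lr = 0.05
    for epoch in range(20):
        for x, y in zip(X, Y):
            theta = x @ w                               # predict item utilities
            w -= lr * (x.T @ fy_grad_theta(theta, y, k))  # chain rule through theta = x w

    print("predicted decision:", top_k(X[0] @ w, k))
    print("target decision:   ", Y[0])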
This research examines a distributionally robust optimization approach for the lot-sizing problem with production delays and yield uncertainty under event-wise ambiguity sets. Moment-based, Wasserstein, and K-means-clustering-based ambiguity sets are used to represent the yield distribution. Static and static-dynamic decision strategies are also considered for computing a solution. In this presentation, the performance of the different ambiguity sets will be compared in order to determine a production plan that is satisfactory and robust to changes in the environment. It will be shown, through numerical experiments, that the model remains tractable for all considered ambiguity sets and that the resulting production plans remain effective under different decision strategies and contexts.
Marketplaces will account for more than 62% of global e-commerce sales by 2023, with their turnover increasing 4-fold in less than 10 years. At the same time, e-commerce continues to grow: by 2023 it will be worth 159.9 billion euros in France, accounting for 10% of retail sales. The recommendation algorithms - the famous buyboxes - of these marketplaces guide our choices. But what do we know about their fairness? How do we know whether they are deceiving us, whether they are biased, or whether they are simply maximizing the platform's turnover?
We sought to show how to audit the fairness of these algorithms, seen as black boxes, by studying the case of Amazon, on several thousand products in France. We will show that the search for a prejudicial bias (for the consumer or for the platform's competitors) raises the question of finding a context of use in which the bias is statistically significant.
We will illustrate the combinatorial optimization problem that arises when the bias is ‘regional’ (neither global nor local to a decision) and when the sensitive variable (describing the discriminated population) is continuous and non-binary.
Finally, and more generally, we will describe the issues that arise when the query budget for the black-box algorithm is limited and when several auditors cooperate to detect bias more effectively.
In this presentation, a response matrix (here, species abundances) is assumed to depend on explanatory variables (here, environmental variables) that are numerous and redundant, thus calling for dimension reduction. Supervised Component-based Generalized Linear Regression (SCGLR), a Partial Least Squares-type method, is designed to extract from the explanatory variables several components jointly supervised by the set of responses. However, this methodology still has some limitations that we aim to overcome in this work. The first limitation comes from the assumption that all the responses are predicted by the same explanatory space. The second is that previous works involving SCGLR assume the responses to be independent conditional on the explanatory variables; this is unlikely to hold in practice, especially in situations like those encountered in ecology, where a non-negligible part of the explanatory variables could not be measured. To overcome the first limitation, we assume that the responses are partitioned into several unknown groups, and we suppose that the responses in each group are predictable from an appropriate number of specific orthogonal supervised components of the explanatory variables. The second work relaxes the conditional independence assumption: a set of a few latent factors models the residual covariance matrix of the responses conditional on the components. The approaches presented in this work are tested on simulation schemes and then applied to ecology datasets.
An energy community (EC) is a legal entity involving prosumers and consumers who produce, consume, and exchange energy. The members of these communities can cooperate to maximize the community's social welfare. In practice, this naturally raises the question of cost sharing in the community, as the members may contribute differently to the social welfare. In this presentation, we empirically highlight the benefits of cooperation for the community and for its individual members. Then, we present cost-sharing mechanisms that guarantee fairness and the stability of the grand coalition composed of all prosumers and consumers. Finally, we present results on instances built with real-world data from our partner Sween's demonstrator, Smart Lou Quila, in the south of France.
When planning large projects, it is convenient to rely on a baseline schedule, which gives a preview of logistical requirements. However, project data is often uncertain and jobs may be interrupted due to fortuitous causes of unavailability. Disruptions occurring between the planning and the execution of the project may cause the baseline schedule to become unfeasible. Part of the project must then be rescheduled, possibly entailing large costs.
Anchor robustness is a two-stage robust approach that aims at computing a baseline schedule in which a subset of so-called anchored jobs have their starting times guaranteed against any disruption in a given uncertainty set. Anchor-robust scheduling, which was previously studied for processing-time uncertainty, is extended to the case of uncertain unavailability periods.
An ecological landscape can be modelled as a directed graph G=(V,A) whose vertices represent the habitat patches of the landscape and whose arcs represent the connections between these patches. Each vertex carries a weight indicating the ecological quality of the patch it represents, and each arc is associated with a length representing the difficulty for an organism to make the corresponding move. The Probability of Connectivity of the landscape is computed from the shortest-path distances in this weighted graph and is often used by ecologists to assess landscape connectivity and to identify the areas to prioritize for conservation or restoration.
We are interested in the problem of maximizing the Probability of Connectivity of a landscape under a budget constraint, that is, finding the best combination of management options from a given set, each option being modelled as a modification of the graph weights.
We give an ILP formulation of this problem and propose a shortest-path preprocessing technique that significantly reduces the size of the linear programs to be solved. To implement this preprocessing efficiently, we give an O(|A| + |V| log |V|) time algorithm for the following problem: given a set of scenarios characterized by the possible choices of arc lengths and an arc (u,v), compute the set of vertices t such that (u,v) lies on a shortest path from u to t in every scenario.
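For reference, a common form of the Probability of Connectivity index (our notation; the exact variant used in the talk may differ) is

\[
PC \;=\; \frac{1}{A_L^{2}} \sum_{i \in V} \sum_{j \in V} a_i\, a_j\, e^{-\theta\, d_{ij}},
\]

where a_i is the quality of patch i, d_{ij} is the shortest-path distance between patches i and j in the weighted graph, \theta is a dispersal parameter, and A_L is a normalizing constant (the total landscape area); management options affect PC by modifying the weights that determine the a_i and d_{ij}.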
One of the key properties of convex problems is that every stationary point is a global optimum, so nonlinear programming algorithms that converge to local optima are guaranteed to find the global optimum. However, some nonconvex problems possess the same property. This observation has motivated research into generalizations of convexity. This talk proposes a new generalization, which we refer to as optima-invexity: the property that only one connected set of optimal solutions exists. We state conditions for optima-invexity of unconstrained problems, discuss structures that are promising for practical use, and outline algorithmic applications of these structures.
In Europe, short food supply chains are recognized as a cornerstone of the transition towards a more sustainable food system. New actors are emerging to facilitate relationships between producers and institutions such as catering services. They often operate within a restricted area, with precise knowledge of producers' stocks and delivery capacities and of the catering services' product needs. Their role is to allocate orders to producers and to manage their distribution, sometimes via a platform. The objective is to minimize logistics costs (total distance travelled) while maximizing customer preference for products that are as local as possible. Making good decisions in this context is a complex task because of several constraints such as delivery capacities, deadlines, and the perishability of fresh products. Moreover, the assignment of orders to producers intrinsically influences the delivery.
In this study, we propose a MILP model for this problem and develop an exact method based on logic-based Benders decomposition. To improve efficiency, we integrate strategies to accelerate convergence. Using a real case study (Manger Bio Centre-Val de Loire), we evaluate our method on semi-random instances of reasonable size. The results show that our approach largely outperforms solving the MILP directly with CPLEX.
The transportation problem has recently gained renewed interest due to its applications in machine learning and artificial intelligence, particularly in measuring distances between images.
To address the associated computational challenges, we propose a novel algorithm, Iterated Inside Out, for the exact resolution of the well-known transportation problem.
The core of our algorithm requires an initial basic feasible solution and consists of two iteratively repeated phases until an optimal basic feasible solution is obtained.
In the "inside" phase, the algorithm progressively improves the current solution by increasing the value of non-basic variables with negative reduced costs, typically yielding a feasible solution that is non-basic and interior to the constraint set polytope.
In the "outside" phase, the algorithm improves the current solution by iteratively setting variables to zero until a new improved basic feasible solution is reached.
Building on this core algorithm, we develop a specialized variant tailored for image processing.
Extensive computational experiments demonstrate that our approach significantly outperforms commercial solvers such as CPLEX and Gurobi, as well as other exact algorithms in the literature.
We demonstrate its superiority on both randomly generated instances and the DOTmark benchmark, a standard dataset for image processing and large-scale transportation problems.
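For reference, the underlying linear program (with supplies s_i, demands d_j, and unit costs c_{ij}) is the classical transportation problem

\[
\min \sum_{i}\sum_{j} c_{ij}\, x_{ij}
\quad \text{s.t.} \quad \sum_{j} x_{ij} = s_i \;\; \forall i, \qquad \sum_{i} x_{ij} = d_j \;\; \forall j, \qquad x_{ij} \ge 0,
\]

whose basic feasible solutions correspond to spanning trees of the bipartite supply-demand graph; the "inside" and "outside" phases described above move, respectively, into the interior of this polytope and back to an improved basic feasible solution.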
In this talk we consider two-stage linear adjustable robust optimization problems with continuous and fixed recourse.
These problems have been the subject of exact solution approaches, notably constraint generation (CG) and constraint-and-column generation (CCG).
Both approaches rely on an exponential-size reformulation of the problem that uses a large number of constraints, or of constraints and variables.
The decomposition algorithms then solve and reinforce a relaxation of this reformulation through iterations that require solving bilinear separation problems.
Here, we present an alternative approach relying on a novel reformulation of the problem with an exponential number of semi-infinite constraints. We present a nested decomposition algorithm to deal with the exponential and semi-infinite natures of our formulation separately.
We argue that our algorithm leads to solving fewer bilinear separation problems while providing a high-quality relaxation.
We perform a detailed numerical study that showcases the superior performance of our proposed approach compared to the state-of-the-art and evaluates the contribution of different algorithmic components.
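In generic form (our notation), the problems considered are of the type

\[
\min_{x \in X} \; c^{\top}x \;+\; \max_{\xi \in \Xi} \; \min_{y \ge 0} \big\{ q^{\top} y \;:\; W y \ge h(\xi) - T(\xi)\, x \big\},
\]

where the recourse variables y are continuous and the recourse matrix W does not depend on the uncertainty \xi (fixed recourse).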
Many combinatorial optimization (CO) problems can be formulated as resource-constrained shortest path problems (RCSPPs) on directed acyclic multigraphs, which represent state-transition graphs defined under the dynamic programming (DP) paradigm. The number of vertices, arcs, and resource constraints depends on the size of the original problem instance. The resulting RCSPP is NP-hard, and exact methods require high-quality primal bounds to converge efficiently.
In this work, we focus on designing a generic constructive heuristic algorithm that can be applied to any problem once it is formulated as an RCSPP on a directed acyclic multigraph. Recent advances have demonstrated that combining machine learning (ML) with tree-based search strategies can enhance the solution of CO problems. Building on this idea, we propose an ML-enhanced beam search algorithm for RCSPPs. Our ML model leverages graph-based information to score candidate paths for expansion.
The proposed method offers several advantages: (1) it is generic, (2) the ML model is independent of problem size, and (3) it requires no additional problem-specific information beyond the graph structure. We evaluate our approach on the single machine total tardiness problem and the temporal knapsack problem (TKP), yielding improved results compared to standard beam search without ML integration.
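The skeleton below (our own schematic illustration, not the authors' implementation) shows where an ML scoring model plugs into a beam search over a DAG; the graph encoding, resource check, and score function are illustrative placeholders:

    import heapq

    def beam_search(graph, source, target, width, score, extend_ok):
        # Beam search on a DAG given as {node: [(successor, arc_data), ...]}.
        # `score` ranks partial paths (higher is better): this is where a
        # learned model could be plugged in.  `extend_ok` returns the updated
        # resource state, or None if the extension violates a resource limit.
        beam = [(source, [source], {})]            # (node, path, resource state)
        best = None
        while beam:
            candidates = []
            for node, path, state in beam:
                for succ, arc in graph.get(node, []):
                    new_state = extend_ok(state, arc)
                    if new_state is None:          # resource-infeasible extension
                        continue
                    cand = (succ, path + [succ], new_state)
                    if succ == target:
                        if best is None or score(cand) > score(best):
                            best = cand
                    else:
                        candidates.append(cand)
            beam = heapq.nlargest(width, candidates, key=score)  # keep best partial paths
        return best

    # Toy usage: one additive resource capped at 10, prefer low consumption.
    G = {"s": [("a", 3), ("b", 2)], "a": [("t", 4)], "b": [("t", 9)]}
    ok = lambda st, arc: None if st.get("r", 0) + arc > 10 else {"r": st.get("r", 0) + arc}
    sc = lambda cand: -cand[2].get("r", 0)
    print(beam_search(G, "s", "t", width=2, score=sc, extend_ok=ok))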
For the analysis of time series, or for the study of the convergence of certain stochastic algorithms such as those based on Langevin dynamics, the dynamics of some discrete-time Markov processes is naturally perturbed by the introduction of a stochastic disorder, called noise, exogenous factor, or random environment depending on the terminology used in the different fields. The objective of this talk is to show that the stability of these Markov chains in a random environment on unbounded spaces can be studied using adapted versions of more classical stability conditions (drift conditions, small-set conditions, or contraction of random maps). Sufficient conditions for the existence of stationary and ergodic measures, as well as a control of the alpha-mixing coefficients of these processes, can be obtained. One can then deduce a limit theory for autoregressive processes with exogenous covariates, as well as qualitative estimates of the convergence rate of certain stochastic algorithms.
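As an illustration of the kind of classical condition being adapted (our notation, stated here for a chain without random environment), a Foster-Lyapunov drift condition requires a function V \ge 1, constants \lambda < 1 and b < \infty, and a small set C such that

\[
\mathbb{E}\big[\, V(X_{n+1}) \mid X_n = x \,\big] \;\le\; \lambda\, V(x) \;+\; b\, \mathbf{1}_{C}(x),
\]

and the talk studies versions of such conditions in which the transition kernel additionally depends on a random environment.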
The vehicle routing problem with time windows (VRPTW) is a classical problem with many applications. To improve schedule accuracy, models may account for time dependency in travel times. The real traffic data used as input to the optimizers are often obtained from location providers that predict travel times for specified timeframes. The predictors estimate travel times based on historical data, where uncertainties from accidents, weather conditions, and roadworks may add unpredictable delays. These providers enhance their forecasts by monitoring live traffic and updating travel times throughout the day.
Since routes are typically designed in advance, updates are hard to integrate once execution starts. In this context, this work introduces the robust time-dependent VRPTW (RTDVRPTW), which uses robust optimization to handle uncertainty in travel times.
In this talk, we discuss the computational properties of RTDVRPTW under various assumptions, present exact and heuristic solution methods, and share results from real-data experiments to offer practical insights.
Bilevel optimization is a rather young but very active sub-field of mathematical optimization. The main reason is that bilevel optimization problems can serve as a powerful tool for modeling hierarchical decision-making processes, which arise in various real-world applications such as in critical infrastructure defense, transportation, or energy. However, the nested structure of bilevel problems makes them intrinsically hard to solve—even if all parameters of the problem are exactly known. Further challenges arise if problems under uncertainty are considered.
In this talk, we begin with a brief introduction to the key concepts of bilevel optimization, covering the structure, characteristics, and commonly used reformulations of these problems. We then explore how techniques from robust optimization can be used to tackle bilevel problems under uncertainty. In particular, we highlight that the sources of uncertainty in bilevel optimization are much richer compared to classic, i.e., single-level, problems since not only the problem data but also the (observation of the) decisions of the two players can be subject to uncertainty.
Finally, we discuss recent algorithmic advances for solving mixed-integer linear bilevel problems with a binary lower level and a Gamma-robust treatment of lower-level objective uncertainty. To solve these Gamma-robust bilevel problems, we present an exact and problem-tailored branch-and-cut approach as well as a heuristic framework. The latter relies on solving a linear number of deterministic bilevel problems so that no problem-specific tailoring is required.
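For reference (our notation), a bilevel problem of the kind discussed above has the generic form

\[
\min_{x \in X} \; F(x, y)
\quad \text{s.t.} \quad y \in \arg\min_{\bar{y} \in Y(x)} \; f(x, \bar{y}),
\]

where the leader chooses x anticipating the follower's reaction y; in the Gamma-robust setting of the talk, the lower-level objective f is additionally protected against up to Gamma deviations of its uncertain coefficients.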