State-Space Models (SSMs) are deterministic or stochastic dynamical systems defined by two processes. The state process, which is not observed directly, models the transformation of the system states over time, while the observation process produces the observables on which model fitting and prediction are based. Ecology frequently uses stochastic SSMs to represent the imperfectly observed dynamics of population sizes or animal movement. However, several simulation-based evaluations of model performance suggest broad identifiability issues in ecological SSMs. Formal SSM identifiability is typically investigated using exhaustive summaries, which are simplified representations of the model. The theory of exhaustive summaries is largely based on continuous-time deterministic modelling, and exhaustive summaries for discrete-time stochastic SSMs have been developed by analogy. While the discreteness of time does not constitute a challenge, finding a good exhaustive summary for a stochastic SSM is more difficult. The strategy adopted so far has been to create exhaustive summaries based on a transfer function of the expectations of the stochastic process. However, this evaluation of identifiability cannot account for a possible dependence between the variance parameters and the process parameters. We show that the output spectral density plays a key role in stochastic SSM identifiability assessment. This allows us to define a new, suitable exhaustive summary. Using several ecological examples, we show that usual ecological models are often theoretically identifiable, suggesting that most SSM estimation problems are due to practical rather than theoretical identifiability issues.
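For orientation, a minimal worked example (an illustration, not necessarily the models of the talk) is the linear Gaussian SSM, whose output spectral density has a closed form, up to the chosen spectral normalization:

$$
x_t = A\,x_{t-1} + \varepsilon_t,\quad \varepsilon_t \sim \mathcal{N}(0,Q),\qquad
y_t = C\,x_t + \eta_t,\quad \eta_t \sim \mathcal{N}(0,R),
$$
$$
S_y(\omega) \;\propto\; C\,\bigl(I - A e^{-i\omega}\bigr)^{-1} Q\,\bigl(I - A^{\top} e^{i\omega}\bigr)^{-1} C^{\top} + R .
$$

A dependence between the variance parameters $(Q,R)$ and the process parameters $(A,C)$ is visible in $S_y$, which the expectation-based summaries described above cannot capture.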
In this presentation, we discuss recent results in the mathematical modeling of Alzheimer’s disease, based on the amyloid cascade hypothesis related to the polymerization process of the Aβ protein. We first present the analytical and numerical study of a spatial model describing the propagation of Aβ in interaction with neuronal activity. We then discuss a second model that incorporates the immune response, in particular the inflammation mediated by microglial cells. Finally, we present qualitative results related to the estimation of the nucleation rate, a critical parameter in the progression of the disease, for which no experimental measurements are currently available.
In this seminar, I will discuss various stochastic optimization schemes, such as stochastic gradient descent and the stochastic Heavy-Ball scheme. We will establish uniform-in-time weak error estimates for the error between the solution of the numerical scheme and that of modified (or high-resolution) continuous differential equations, at first and second order with respect to the step size. Finally, we will illustrate these results with numerical simulations.
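Schematically (a first-order sketch in notation not taken from the talk; the modified and high-resolution equations studied there include step-size-dependent correction terms), the two schemes and their continuous counterparts are

$$
\theta_{k+1} = \theta_k - \gamma\,\nabla f(\theta_k,\xi_k)
\ \longleftrightarrow\
\dot\theta = -\nabla F(\theta),
\qquad
\theta_{k+1} = \theta_k + \beta\,(\theta_k - \theta_{k-1}) - \gamma\,\nabla f(\theta_k,\xi_k)
\ \longleftrightarrow\
\ddot\theta + a\,\dot\theta + \nabla F(\theta) = 0,
$$

where $F(\theta) = \mathbb{E}_{\xi}[f(\theta,\xi)]$ and $a$ depends on $\beta$ and $\gamma$.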
Today, Europe, and France in particular, face very large-scale challenges: adapting to the impacts of climate change, positioning themselves with respect to the superpowers (USA, China), deindustrialization, loss of sovereignty, attractiveness and competitiveness, etc. The digital sector is no exception. So, facing the GAFAM and the BATX, would developing digital commons not be an appropriate response?
Given a measure on the real line, one can consider the corresponding orthogonal polynomials, which satisfy a three-term recurrence that defines the Jacobi matrix. The classical Szegő theorem gives a condition on the measure under which these polynomials satisfy strong asymptotics. We will discuss the generalization to the case of multiple orthogonality and show that the corresponding polynomials give rise to a Jacobi matrix on the tree. Finally, we will explain how classical fixed-point theorems from functional analysis can be used to obtain their strong asymptotics. Based on joint work with A. Aptekarev and M. Yattselev.
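As a reminder (one common normalization, for orientation only), the orthonormal polynomials, their recurrence, the Jacobi matrix, and the Szegő condition read

$$
x\,p_n(x) = a_{n+1}\,p_{n+1}(x) + b_n\,p_n(x) + a_n\,p_{n-1}(x),\qquad a_n>0,\ b_n\in\mathbb{R},
$$
$$
J = \begin{pmatrix} b_0 & a_1 & & \\ a_1 & b_1 & a_2 & \\ & a_2 & b_2 & \ddots \\ & & \ddots & \ddots \end{pmatrix},
\qquad
\int_{-1}^{1}\frac{\log \mu'(x)}{\sqrt{1-x^2}}\,dx > -\infty
\quad\text{(Szegő condition on } [-1,1]\text{)}.
$$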
Starting from the Abel-Jacobi Theorem we will motivate the interest in moduli spaces of vector bundles on algebraic varieties.
We study a robust extensible bin packing problem under a budgeted uncertainty model, in which item sizes lie in the intersection of a box with a one-norm ball. We propose a scenario generation algorithm for this problem, which alternates between solving a master robust bin packing problem with a finite uncertainty set and solving a separation problem. We first show that the separation is strongly NP-hard given solutions to the continuous relaxation of the master problem. Then, focusing on the separation problem for the integer master problem, we show that it becomes a special case of the continuous convex knapsack problem, which is known to be weakly NP-hard. Next, we prove that our special case, in which each of the functions is piecewise linear with only two pieces, remains NP-hard. We develop a pseudo-polynomial dynamic program (DP) and a fully polynomial-time approximation scheme (FPTAS) for this special case, whose running times match those of a binary knapsack FPTAS. Our computational study shows that the DP can be significantly more efficient in practice than solving the problem with specially ordered set (SOS) constraints using advanced mixed-integer programming (MIP) solvers. Our experiments also demonstrate the application of our separation method to solving the robust extensible bin packing problem, including an evaluation of deferring the exact solution of the master problem and separating based on approximate master solutions in intermediate iterations. Finally, a case study based on real elective surgery data demonstrates the potential advantage of our model compared with the actual schedule and with optimal nominal schedules.
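For readers who want a concrete reference point for the complexity claims above, here is the classical pseudo-polynomial DP for the binary knapsack problem, whose FPTAS running times the abstract compares against; this is only an illustrative analogue, not the paper's separation DP for the two-piece piecewise-linear continuous convex knapsack.

```python
# Classical 0/1 knapsack DP in O(n * capacity) time: the pseudo-polynomial
# benchmark referred to above. Illustrative analogue only, not the paper's
# separation algorithm.
def knapsack(values, weights, capacity):
    best = [0] * (capacity + 1)                  # best[c] = max value within capacity c
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):     # reverse order: each item used at most once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

assert knapsack([60, 100, 120], [10, 20, 30], 50) == 220
```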
We define a determinant on the automorphisms of non-trivial Severi-Brauer surfaces. Using generators and relations, we extend this determinant to birational maps between Severi-Brauer surfaces. Using this determinant and a group homomorphism found in [BSY23], we can determine the abelianisation of the Cremona group of a non-trivial Severi-Brauer surface. This is the first example of an abelianisation of the Cremona group of a geometrically rational surface where the automorphism group is not trivial.
TBA
Updatable Encryption (UE) allows ciphertexts to be updated under new keys without decryption, enabling efficient key rotation. Constructing post-quantum UE with strong security guarantees is challenging: the only known CCA-secure scheme, COM-UE, uses bitwise encryption, resulting in large ciphertexts and high computational costs.
We introduce DINE, a CCA-secure, isogeny-based post-quantum UE scheme that is both compact and efficient. Each encryption, decryption, or update requires only a few power-of-2 isogeny computations in dimension 2 to encrypt 28-byte messages, yielding 320-byte ciphertexts and 224-byte update tokens at NIST security level 1---significantly smaller than prior constructions. Our full C implementation demonstrates practical performance: updates in 7 ms, encryptions in 48 ms, and decryptions in 86 ms.
Our design builds on recent advances in isogeny-based cryptography, combining high-dimensional isogeny representations with the Deuring correspondence. We also introduce new algorithms for the Deuring correspondence which may be of independent interest. Moreover, the security of our scheme relies on new problems that might open interesting perspectives in isogeny-based cryptography.
preprint: https://eprint.iacr.org/2025/1853
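To make the interface concrete, here is a deliberately insecure toy sketch of the generic UE API (KeyGen / Encrypt / TokenGen / Update / Decrypt) using plain XOR; DINE instantiates this interface with isogeny-based primitives, so none of the code below reflects the actual scheme.

```python
# Toy illustration of the updatable-encryption interface only.
# Deliberately insecure (XOR one-time-pad style); shown solely to make the
# "update without decryption" workflow concrete.
import os

MSG_LEN = 28  # bytes, mirroring the 28-byte messages mentioned above

def keygen():
    return os.urandom(MSG_LEN)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

encrypt = decrypt = xor                 # c = m XOR k,  m = c XOR k
def token(k_old, k_new):
    return xor(k_old, k_new)            # update token, derived from both keys

def update(ct, delta):
    return xor(ct, delta)               # re-encrypts the ciphertext without decrypting it

k1, k2 = keygen(), keygen()
m = b"x" * MSG_LEN
c1 = encrypt(m, k1)
c2 = update(c1, token(k1, k2))          # rotate from k1 to k2 directly on the ciphertext
assert decrypt(c2, k2) == m
```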
https://www.maths-vives.fr/projet/climaths/
In the theory of information geometry developed by Amari, the Fisher information defines a Riemannian metric on a given parametric family of probability distributions. This metric makes it possible to compare and interpolate between distributions of the same family, and is characterized by its invariance under sufficient statistics. Amari also introduces a family of affine connections with the same property. In this talk, we will discuss the generalization of these objects to the nonparametric setting, and the geometric structures induced on the space of probability measures.
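As a reminder of the parametric case generalized in the talk, the Fisher metric on a family $\{p_\theta\}$ is

$$
g_{ij}(\theta) \;=\; \mathbb{E}_{x\sim p_\theta}\!\bigl[\partial_{\theta_i}\log p_\theta(x)\;\partial_{\theta_j}\log p_\theta(x)\bigr],
$$

and invariance under sufficient statistics means that $g$ is unchanged when the data are transformed by a statistic that loses no information about $\theta$.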
After a quick overview of the general principles of Life Cycle Assessment (LCA), we will investigate how such a tool can help compare the environmental impact of different architectures of computer systems used for teaching purposes in higher education. In particular, we will see how to perform the life cycle inventory of the systems under study from a practical standpoint. We will then review the main results of the life cycle impact assessment and discuss them, as well as the limitations of this study.
See here: conference link
TBA
The abstract of Pascal's talk
TBA
The Boundary Control method is one of the main techniques in the theory of inverse problems. It makes it possible to recover the metric or the potential of a wave equation on a Riemannian manifold from its Dirichlet-to-Neumann map (or variants thereof) under very general geometric assumptions. In this talk we will address the issue of obtaining stability estimates for the recovery of a potential in some specific situations. As it turns out, this problem is related to the study of the blow-up of quantities coming from control theory and unique continuation. This is based on joint work with Lauri Oksanen.
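In one standard formulation of the potential-recovery problem (notation not taken from the talk; the precise setting there may differ), one considers

$$
\partial_t^2 u - \Delta_g u + q\,u = 0 \ \text{ in } (0,T)\times M,\qquad u|_{t=0}=\partial_t u|_{t=0}=0,\qquad u = f \ \text{ on } (0,T)\times\partial M,
$$

and asks to recover $q$ from the Dirichlet-to-Neumann map $\Lambda_q : f \mapsto \partial_\nu u\,\big|_{(0,T)\times\partial M}$.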
TBD
Transport systems are becoming ever more connected, capable of addressing major challenges such as safety, traffic flow, and environmental impact. Cooperative Intelligent Transport Systems (C-ITS) rely on real-time communication between vehicles, infrastructure, and users, enabled by advances in networks, sensors, and data processing.
In this talk, we will focus on boundaries of multiply connected Fatou components, from a topological, measure-theoretical and dynamical point of view. The main tool in our analysis is the universal covering map (and its boundary extension), which allows us to relate the dynamics on the boundary with the dynamics of the radial extension of the so-called associated inner function. This way, we can deal with all Fatou components (invariant or wandering, with all possible internal dynamics) simultaneously.
This is joint work with G. R. Ferreira.
The multimodal nature of clinical assessment and decision-making, and the high rate of healthcare data generation, motivate the need to develop approaches specifically tailored to the analysis of these complex and potentially high-dimensional multimodal datasets. This poses both technical and conceptual challenges: how can such heterogeneous data be analyzed jointly? How can modality-specific information be distinguished from shared information? Variational autoencoders (VAEs) offer a robust framework for learning latent representations of complex data distributions, while being flexible enough to adapt to different data types and structures, and have already been successfully applied to latent disentanglement of multimodal (multi-channel) data. We aim to tackle multi-channel disentanglement from a causal perspective, and seek to identify causal relationships between channels, beyond simple statistical associations. To do so, we propose the Multi-Channel Causal VAE (MC²VAE), a novel causal disentanglement approach for multi-channel data, whose objective is to jointly learn modality-specific latent representations from a multi-channel dataset and identify a causal structure between the latent channels. Each channel is projected into its own latent space, where a causal discovery step is integrated to learn the hidden causal graph. Finally, the decoder takes the discovered graph into account to predict the data. Covariates of interest can be integrated as well when available, and accounted for in the causal graph structure. Extensive experiments on synthetically generated multi-channel datasets demonstrate the ability of MC²VAE to effectively uncover the underlying latent causal structures across multiple channels, making it a strong candidate for real-world multi-channel causal disentanglement. Application to multi-channel data on neurodegeneration extracted from the Alzheimer's Disease Neuroimaging Initiative highlights the existence of a biologically meaningful latent causal structure, whose pertinence is supported by multiple previous experimental and modelling works, and provides actionable insight into disease progression.
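As a rough sketch of the architecture described above (an illustration under simplifying assumptions, not the authors' MC²VAE code): each channel gets its own encoder and decoder, a learnable soft adjacency matrix couples the channel latents, and each decoder conditions on the graph-weighted parent latents. The class names and hyperparameters below are hypothetical.

```python
# Minimal multi-channel VAE sketch with a learnable latent adjacency matrix.
# Illustrative only: a real causal-discovery step would add sparsity and
# acyclicity penalties on the adjacency, and covariates when available.
import torch
import torch.nn as nn

class ChannelVAE(nn.Module):
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 2 * z_dim))
        # decoder sees this channel's latent plus its graph-weighted parents
        self.dec = nn.Sequential(nn.Linear(2 * z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))

class MultiChannelCausalVAESketch(nn.Module):
    def __init__(self, x_dims, z_dim):
        super().__init__()
        self.channels = nn.ModuleList(ChannelVAE(d, z_dim) for d in x_dims)
        self.adj = nn.Parameter(torch.zeros(len(x_dims), len(x_dims)))  # soft adjacency

    def forward(self, xs):
        mus, logvars, zs = [], [], []
        for x, ch in zip(xs, self.channels):
            mu, logvar = ch.enc(x).chunk(2, dim=-1)
            zs.append(mu + torch.randn_like(mu) * torch.exp(0.5 * logvar))
            mus.append(mu); logvars.append(logvar)
        Z = torch.stack(zs, dim=1)                                   # (batch, channel, z_dim)
        A = torch.sigmoid(self.adj) * (1 - torch.eye(len(zs)))       # no self-loops
        parents = torch.einsum("ij,bjd->bid", A, Z)                  # graph-weighted parent latents
        recons = [ch.dec(torch.cat([Z[:, i], parents[:, i]], dim=-1))
                  for i, ch in enumerate(self.channels)]
        kl = sum(-0.5 * torch.sum(1 + lv - m.pow(2) - lv.exp(), dim=-1).mean()
                 for m, lv in zip(mus, logvars))
        return recons, kl

model = MultiChannelCausalVAESketch(x_dims=[10, 5], z_dim=3)
recons, kl = model([torch.randn(8, 10), torch.randn(8, 5)])
```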
...
This talk presents computational and theoretical advances and experiments in Mixed Integer Nonlinear Programming (MINLP) across two complementary themes. The first focuses on emerging MINLP techniques — online learning for pseudo-cost estimation, ReLU-based neural methods for cut separation, and AlphaEvolve-style modelling — that aim to modernize the MINLP solver. The second focuses on aggregation-based cutting planes, highlighting the practical importance of Complemented Mixed Integer Rounding (CMIR) cuts in modern MILP solvers. A sparsity-driven aggregation framework is introduced that models aggregation as an MILP, together with a two-stage LP heuristic that produces sparse, strong aggregated rows with measurable gains on MIPLIB2017. Theoretical results show that CMIR cuts frequently define faces (and, empirically, facets) of the convex hull; a Fenchel-style normalization is proposed to strengthen them. Finally, we give an outlook on MINLP solving.
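For context (a standard fact, not specific to the talk), CMIR cuts are obtained by aggregation, complementation, and scaling from the basic mixed-integer rounding inequality: for the set

$$
X=\{(x,y)\in\mathbb{Z}\times\mathbb{R}_{+} : x + y \ge b\},\qquad f = b - \lfloor b\rfloor > 0,
$$

the MIR inequality $x + y/f \ge \lceil b\rceil$ is valid for $X$ and cuts off the fractional point $(b,0)$.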
I will first discuss a problem of high-dimensional probability known as the ellipsoid fitting conjecture. I will present recent progress on this conjecture using both non-rigorous analytical tools of statistical physics, and rigorous methods based on the combination of universality results in statistics and extensions of classical approaches in random convex geometry. In a second part, I will discuss how the techniques developed to analyze ellipsoid fitting can be used to sharply characterize optimal learning in a wide neural network with a quadratic activation function, as well as in a model of learning from long sequences of high-dimensional tokens.
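For reference, the ellipsoid fitting question (in its usual formulation; this is background, not a result of the talk) asks whether, for $x_1,\dots,x_n$ i.i.d. $\mathcal{N}(0, I_d)$, there exists a centered ellipsoid passing through all of them, i.e.

$$
\exists\, \Sigma \succeq 0 \ \text{ such that } \ x_i^{\top}\Sigma\, x_i = 1 \quad \text{for all } i \le n,
$$

with a conjectured feasibility transition at $n \approx d^2/4$ as $d \to \infty$.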
We will discuss the following result. Suppose we are given two families of Hénon maps $(f_t)_t$ and $(g_t)_t$, parametrized by an algebraic curve and defined over a number field, and that one of them is dissipative. Then there exist a positive constant $C$ and two positive integers $N$ and $M$ such that, for every parameter $t$, either the number of periodic points common to $f_t$ and $g_t$ is less than $C$, or $f_t^N = g_t^M$. This is work in progress, in collaboration with Marc Abboud.
TBA
The half-line Dirac operators with $L^2$-potentials can be characterized by their spectral data. It is known that the spectral correspondence is a homeomorphism: close potentials give rise to close spectral data and vice versa. We prove the first explicit two-sided uniform estimate related to this continuity in the general $L^2$ case. The proof is based on an exact solution of the inverse spectral problem for Dirac operators with $\delta$-interactions on a half-lattice in terms of Schur's algorithm for analytic functions. The new approach raises interesting questions that I will also discuss in the talk. Joint work with Pavel Gubkin.
A die is rolled, and the chosen person improvises a 15-minute talk.
Followed by a team meal at La Passerelle.
Link available soon
We will examine stochastic implicit algorithms. These algorithms have proven more robust with respect to step-size selection than their explicit counterparts. Specifically, we will show that variance reduction techniques make it possible to improve the convergence rates of these implicit algorithms, as has already been shown for explicit ones.
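A minimal numerical sketch (an illustration, not the speaker's algorithms, and without the variance reduction mentioned above): explicit versus implicit SGD on a least-squares problem, where the implicit (proximal) step has a closed form and remains stable for much larger step sizes.

```python
# Explicit vs implicit (proximal) SGD on least squares.
# The implicit update solves  theta' = theta - gamma * (x_i . theta' - y_i) * x_i
# exactly via Sherman-Morrison, which keeps it stable for large step sizes.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.standard_normal((n, d))
theta_star = rng.standard_normal(d)
y = X @ theta_star + 0.1 * rng.standard_normal(n)

def sgd(gamma, implicit, n_pass=5):
    theta = np.zeros(d)
    for _ in range(n_pass):
        for i in rng.permutation(n):
            resid = X[i] @ theta - y[i]
            step = gamma / (1.0 + gamma * X[i] @ X[i]) if implicit else gamma
            theta = theta - step * resid * X[i]
    return theta

for gamma in (0.01, 0.5, 5.0):   # explicit SGD blows up for large gamma
    err_exp = np.linalg.norm(sgd(gamma, implicit=False) - theta_star)
    err_imp = np.linalg.norm(sgd(gamma, implicit=True) - theta_star)
    print(f"gamma={gamma}: explicit={err_exp:.3g}, implicit={err_imp:.3g}")
```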
Joint seminar with Optimal
To be announced
TBA
To be determined
To be announced
To be determined
To be determined
TBA