
Publications


CMAP theses are available by following this link:
Discover CMAP theses

Listed below, sorted by year, are the publications appearing in the HAL open archive.

2026

  • A characterization of Generalized functions of Bounded Deformation
    • Chambolle Antonin
    • Crismale Vito
    Journal of Functional Analysis, Elsevier, 2026, 290 (9), pp.111391. We show that Dal Maso's GBD space, introduced for tackling crack growth in linearized elasticity, can be defined by simple conditions in a finite number of directions of slicing.
    DOI : 10.1016/j.jfa.2026.111391
  • Autoregressive Multiplier Bootstrap for In-situ Error Estimation and Quality Monitoring of Finite Time Averages in Turbulent Flow Simulations
    • Papagiannis Christos
    • Balarac Guillaume
    • Congedo Pietro Marco
    • Le Maître Olivier P
    Computer Methods in Applied Mechanics and Engineering, Elsevier, 2026, 452, pp.118664. In Computational Fluid Dynamics (CFD), and particularly within Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES), the computational cost is largely dictated by the effort required to obtain statistically converged quantities such as time-averaged fields and higher-order moments. Despite the importance of accurately quantifying statistical uncertainty in unsteady simulations, no continuous and cost-effective, on-line method currently exists for monitoring the convergence quality of such statistics during runtime. This work introduces a novel, fully on-line bootstrapping approach to estimate the variance of finite-time averages without requiring the estimation of the flow's Auto-Correlation Function (ACF). Unlike existing methods that rely on ACF estimation, which are often impractical due to excessive storage demands in large-scale simulations, or require off-line processing or a priori modeling assumptions, our method operates entirely during the simulation and incurs minimal overhead. The proposed technique employs a recursive update of bootstrap replicates of the time average, using correlated random weights generated via an autoregressive model. This formulation is computationally efficient: the update cost scales linearly with the number of bootstrap replicates and the dimensionality of the flow field, and the autoregressive model is inexpensive to evaluate. The method only requires storage of a small number of fields, making it suitable for large-scale CFD applications. We demonstrate the effectiveness of the approach on synthetic data from the Ornstein-Uhlenbeck process and on two canonical LES cases: a turbulent pipe flow and a round jet. We further discuss the method's applicability to simulations with non-uniform time stepping, highlighting its flexibility and robustness.
    DOI : 10.1016/j.cma.2025.118664
  • Nonnegativity certificates for finite semi-algebraic sets
    • Bender Matías R
    • Tsigaridas Elias
    • Zenkovich Alexander
    , 2026. We introduce new certificates for nonnegativity of multivariate polynomials with rational coefficients over zero-dimensional semi-algebraic sets. They are perfectly complete, certifying every nonnegative polynomial, and perfectly sound, correctly identifying negativity. We rely on resultant computations and Rational Univariate Representations (RUR) and make no assumptions on the input. For the univariate case, we introduce a perturbation technique that avoids root approximation and does not alter the (bit)size of the input. For multivariate polynomials, we make a reduction to the univariate case using RUR. For the dense case, we compute a certificate in O_B(d^{4n+3}(d+τ)) bit operations; it involves numbers of bitsize O(d^{3n+2}(d+τ)), where n is the number of variables, d the degree, and τ the maximum coefficient bitsize of the polynomials. For the sparse case, we provide the first sparse certificate based on the Newton polytope Q of the input polynomials, computed in O_B(vol(Q)^8 (n!)^8 2^{5n+3}(n+τ)) bit operations. For semi-algebraic sets with s inequalities, we present two approaches. The first performs a reduction to the algebraic case and has complexity O_B(2^{(3ω+3)s} d^{5n} τ); it is a purely algebraic approach and does not require root approximation. The second exploits approximate Lagrange interpolation and matches the O(s d^{4n} τ) bitsize bounds of recent work by Baldi, Krick, and Mourrain [2] while improving complexity by orders of magnitude and removing all structural assumptions on the input. Additionally, we provide a witness of negativity, ensuring that we either obtain a certificate or certify that none exists.
  • Modeling the risks within the protocol Aave, with an application to portfolio allocation
    • Gobet Emmanuel
    • Latournerie Louis
    , 2026. Decentralized Finance (DeFi) lending and borrowing protocols enable investors to take leveraged long and short positions on digital assets without centralized intermediaries, but expose them to a distinctive form of risk: on-chain liquidation triggered by fluctuations in debt and collateral values. In this work, we provide a detailed formalization of Aave's lending, borrowing, and liquidation mechanisms, grounded in the protocol's open-source implementation. In doing so, we propose a mathematical model of the liquidation risk, including stochastic approximations designed for efficient analysis, with several applications; among them, a portfolio optimization problem.
  • Sampling from multi-modal distributions on Riemannian manifolds with training-free stochastic interpolants
    • Durmus Alain
    • Noble Maxence
    • Pellerin Thibaut
    , 2026. In this paper, we propose a general methodology for sampling from un-normalized densities defined on Riemannian manifolds, with a particular focus on multi-modal targets that remain challenging for existing sampling methods. Inspired by the framework of diffusion models developed for generative modeling, we introduce a sampling algorithm based on the simulation of a non-equilibrium deterministic dynamics that transports an easy-to-sample noise distribution toward the target. At the marginal level, the induced density path follows a prescribed stochastic interpolant between the noise and target distributions, specifically constructed to respect the underlying Riemannian geometry. In contrast to related generative modeling approaches that rely on machine learning, our method is entirely training-free. It instead builds on iterative posterior sampling procedures using only standard Monte Carlo techniques, thereby extending recent diffusion-based sampling methodologies beyond the Euclidean setting. We complement our approach with a rigorous theoretical analysis and demonstrate its effectiveness on a range of multi-modal sampling problems, including high-dimensional and heavy-tailed examples.
  • Diffusion-based Annealed Boltzmann Generators: benefits, pitfalls and hopes
    • Grenioux Louis
    • Noble Maxence
    , 2026. Sampling configurations at thermodynamic equilibrium is a central challenge in statistical physics. Boltzmann Generators (BGs) tackle it by combining a generative model with a Monte Carlo (MC) correction step to obtain asymptotically unbiased samples from an unnormalized target. Most current BGs use classic MC mechanisms such as importance sampling, which both require tractable likelihoods from the backbone model and scale poorly in high-dimensional, multi-modal targets. We study BGs built on annealed Monte Carlo (aMC), which is designed to overcome these limitations by bridging a simple reference to the target through a sequence of intermediate densities. Diffusion models (DMs) are powerful generative models and have already been incorporated into aMC-based recalibration schemes via the diffusion-induced density path, making them appealing backbones for aMC-BGs. We provide an empirical meta-analysis of DM-based aMC-BGs on controlled multi-modal Gaussian mixtures (varying mode separation, number of modes, and dimension), explicitly disentangling inference effects from learning effects by comparing (i) a perfectly learned DM and (ii) a DM trained from data. Even with a perfect DM, standard integrations using only first-order stochastic denoising kernels fail systematically, whereas second-order denoising kernels can substantially improve performance when covariance information is available. We further propose a deterministic aMC integration based on first-order transport maps derived from DMs, which outperforms the stochastic first-order variant at higher computational cost. Finally, in the learned-DM setting, all DM-aMC variants struggle to produce accurate BGs; we trace the main bottleneck to inaccurate DM log-density estimation.
  • Fast, faithful and photorealistic diffusion-based image super-resolution with enhanced Flow Map models
    • Noble Maxence
    • Quintana Gonzalo Iñaki
    • Aubin Benjamin
    • Chadebec Clément
    , 2026. Diffusion-based image super-resolution (SR) has recently attracted significant attention by leveraging the expressive power of large pre-trained text-to-image diffusion models (DMs). A central practical challenge is resolving the trade-off between reconstruction faithfulness and photorealism. To address inference efficiency, many recent works have explored knowledge distillation strategies specifically tailored to SR, enabling one-step diffusion-based approaches. However, these teacher-student formulations are inherently constrained by information compression, which can degrade perceptual cues such as lifelike textures and depth of field, even with high overall perceptual quality. In parallel, self-distillation DMs, known as Flow Map models, have emerged as a promising alternative for image generation tasks, enabling fast inference while preserving the expressivity and training stability of standard DMs. Building on these developments, we propose FlowMapSR, a novel diffusion-based framework for image super-resolution explicitly designed for efficient inference. Beyond adapting Flow Map models to SR, we introduce two complementary enhancements: (i) positive-negative prompting guidance, based on a generalization of the classifier-free guidance paradigm to Flow Map models, and (ii) adversarial fine-tuning using Low-Rank Adaptation (LoRA). Among the considered Flow Map formulations (Eulerian, Lagrangian, and Shortcut), we find that the Shortcut variant consistently achieves the best performance when combined with these enhancements. Extensive experiments show that FlowMapSR achieves a better balance between reconstruction faithfulness and photorealism than recent state-of-the-art methods for both x4 and x8 upscaling, while maintaining competitive inference time. Notably, a single model is used for both upscaling factors, without any scale-specific conditioning or degradation-guided mechanisms.
  • Sparse recovery of Diffusion Dynamics: Handling High-Dimensionality in Repeated Short Trajectories
    • Bayraktar Elise
    • Dion-Blanc Charlotte
    , 2026. Viewed as systems of interacting particles, high-dimensional stochastic differential equations encode complex interaction structures within their drift component. We propose a novel approach to estimate this drift from independent high-frequency trajectory data observed over a short time horizon. Each trajectory is modelled as the solution of a Brownian-driven stochastic differential equation, while the number of time points within each path tends to infinity. We further assume that the drift function governing the dynamics can be expressed as a linear combination of a growing number of Lipschitz basis functions. To promote accurate recovery of the underlying dynamics under sparsity constraints, we propose a Lasso-regularised likelihood criterion. Under suitable regularity conditions, we establish convergence rates for the resulting estimator and emphasise how they depend on the dimensional parameters of the problem, in particular on the number of observed trajectories. We assess the performance of the estimator on synthetic datasets, both from an estimation and a generative perspective. Finally, we illustrate the practical relevance of the approach on a real-world climate dataset, highlighting its ability to perform variable selection.
  • Robust a posteriori estimation of probit-lognormal seismic fragility curves via sequential design of experiments and constrained reference prior
    • Van Biesbroeck Antoine
    • Gauchy Clément
    • Feau Cyril
    • Garnier Josselin
    Nuclear Engineering and Design, Elsevier, 2026, 448, pp.114695. A seismic fragility curve expresses the probability of failure of a structure conditional on an intensity measure (IM) derived from seismic signals. When only limited data is available, the practitioner often resorts to the probit-lognormal model coupled with maximum likelihood estimation (MLE) to obtain estimates of these curves. This means that only a binary indicator of the state (BIS) of the structure is known, namely a failure or non-failure state indicator, when it is subjected to a seismic signal with an intensity measure IM. In this context, the objective of this work is to propose a method for optimally estimating such curves by obtaining the most precise estimate possible with the minimum of data. The novelty of our work is twofold. First, we present and show how to mitigate the likelihood degeneracy problem which is ubiquitous with small data sets and hampers frequentist approaches such as MLE. Second, we propose a novel strategy for sequential design of experiments (DoE) that selects seismic signals from a large database of synthetic or real signals via their IM values, to be applied to structures to evaluate the corresponding BISs. This strategy relies on a criterion based on information theory in a Bayesian framework. It therefore aims to sequentially designate the IM value such that the pair (IM, BIS) has on average, with respect to the BIS of the structure, the greatest impact on the posterior distribution of the fragility curve. The methodology is applied to a case study from the nuclear industry. The results demonstrate its ability to efficiently and robustly estimate the fragility curve, and to avoid degeneracy even with a limited amount of data, i.e., fewer than 100 points. Furthermore, we demonstrate that the estimates quickly reach the model bias induced by the probit-lognormal modeling. Eventually, two criteria are suggested to help the user stop the DoE algorithm.
    DOI : 10.1016/j.nucengdes.2025.114695
  • Bigraded Castelnuovo-Mumford regularity and Groebner bases
    • Bender Matías R
    • Busé Laurent
    • Checa Carles
    • Tsigaridas Elias
    Journal of Symbolic Computation, Elsevier, 2026, 133, pp.26. We study the relation between the bigraded Castelnuovo-Mumford regularity of a bihomogeneous ideal $I$ in the coordinate ring of the product of two projective spaces and the bidegrees of a Groebner basis of $I$ with respect to the degree reverse lexicographical monomial order in generic coordinates. For the single-graded case, Bayer and Stillman unraveled all aspects of this relationship forty years ago and these results led to complexity estimates for computations with Groebner bases. We build on this work to introduce a bounding region of the bidegrees of minimal generators of bihomogeneous Groebner bases for $I$. We also use this region to certify the presence of some minimal generators close to its boundary. Finally, we show that, up to a certain shift, this region is related to the bigraded Castelnuovo-Mumford regularity of $I$.
    DOI : 10.1016/j.jsc.2025.102487
  • Asymptotic behavior of some stochastic models in population dynamics: a Hamilton-Jacobi approach
    • Jeddi Anouar
    , 2026. In this paper, we investigate the asymptotic behavior of individual-based models describing the evolution of a population structured by a real trait, subject to selection and mutation. We consider two different sets of assumptions: first, the case of critical or subcritical branching population processes in a regime combining a discretization of the trait space, small mutations, large time and large initial population size, where we are able to characterize, using a Hamilton-Jacobi approach, the survival set of the population and the asymptotics of the logarithmic scaling of subpopulation sizes. Second, we generalize by a direct method the convergence to the classical Hamilton-Jacobi equation obtained in the super-critical branching regime considered in [6] to a more general trait space and under weaker assumptions. Moreover, we establish that the stochastic and the deterministic dynamics are asymptotically equivalent in the large-population limit.
  • Validation of the ICI epidemic-spread simulator using public data collected during the first wave of the COVID-19 pandemic
    • Colomb Maxime
    • Talay Denis
    • Carneiro Viana Aline
    • Cormier Quentin
    • Garnier Josselin
    • Gilet Nicolas
    • Graham Carl
    • Grigori Laura
    • Perret Julien
    • Porcher Raphael
    • Ravaud Philippe
    • Stanica Razvan
    • Tomasevic Milica
    • Tran Viet-Thi
    , 2026. The first objective of this report is to present the ICI epidemic-spread simulator, which is based on digital twins of geographic territories, on synthetic populations statistically consistent with real populations, and on the numerical simulation of individuals' hourly schedules and social interactions, as well as of contaminations between individuals. Using a Monte Carlo method, ICI provides precise statistical information on the evolution of epidemics, broken down by geographic zone and by population category. This information makes it possible to quantitatively compare the expected impacts of various health policies. The second objective is to take stock of our tests of ICI's ability to quantitatively reproduce the epidemiological and hospital dynamics observed during the first wave of COVID-19 in Paris, and to analyze ICI's capabilities and limits for counterfactual analyses. Our results show that ICI is already an operational numerical decision-support tool for public interventions against future epidemics, ready to be deployed over multiple territories and for various types of epidemics.
  • Learning with Locally Private Examples by Inverse Weierstrass Private Stochastic Gradient Descent
    • Dufraiche Jean
    • Mangold Paul
    • Perrot Michaël
    • Tommasi Marc
    , 2026. Releasing data once and for all under noninteractive Local Differential Privacy (LDP) enables complete data reusability, but the resulting noise may create bias in subsequent analyses. In this work, we leverage the Weierstrass transform to characterize this bias in binary classification. We prove that inverting this transform leads to a bias-correction method to compute unbiased estimates of nonlinear functions on examples released under LDP. We then build a novel stochastic gradient descent algorithm called Inverse Weierstrass Private SGD (IWP-SGD). It converges to the true population risk minimizer at a rate of O(1/n), with n the number of examples. We empirically validate IWP-SGD on binary classification tasks using synthetic and real-world datasets.
  • Hematopoiesis as a continuum: from stochastic compartmental model to hydrodynamic limit
    • Bansaye Vincent
    • Fernández Baranda Ana
    • Giraudier Stéphane
    • Méléard Sylvie
    , 2026. We consider a multiscale stochastic compartmental model with three types of cells (stem cells, immature cells and mature cells) which combines cell proliferation and cell differentiation. We derive a hydrodynamic limit when the number of immature compartments goes to infinity, obtaining a system of partial differential equations with boundary conditions, modelling hematopoiesis as a continuum. We assume that proliferation and differentiation are regulated and let the corresponding rates depend on the number of mature cells. This leads us to model the dynamics of the population by a Markov process in continuous time and discrete space, which does not satisfy the branching property. We prove the convergence in law of the stem and mature cell population size processes and of the empirical measures of the immature cell dynamics, conveniently rescaled, to the unique triplet involving coupled functions and a measure, which are solutions of a deterministic measure-valued equation with boundary dynamics. The cell differentiation induces a transport term in space, and the main difficulty comes from the boundary effects due to stem and mature cells. We also prove that the limiting measure admits at each time a density with respect to Lebesgue measure and can be characterized as a solution of a partial differential equation.
  • Empirical distribution of ancestral lineages in populations with density-dependent interactions
    • Kubasch Madeleine
    , 2025. We study a density-dependent Markov jump process describing a population where each individual is characterized by a type, and reproduces at rates depending both on its type and on the population type distribution. We are interested in the empirical distribution of ancestral lineages in the population process. First, we exhibit a time-inhomogeneous Markov process, which allows to capture the behavior of a sampled lineage in the population process. This is achieved through a many-to-one formula, which relates the expected value of a functional evaluated over the lineages in the population process to the expectation of the functional evaluated along this time-inhomogeneous process. This provides a direct interpretation of the underlying survivorship bias, as illustrated on a minimalistic population process. Second, we consider the large population regime, when the population size grows to infinity. Under classical assumptions, the population type distribution converges to a deterministic limit. Here, we focus on the empirical distribution of ancestral lineages in this large population limit, for which we establish a many-to-one formula. Using coupling arguments, we further quantify the approximation error which arises when sampling in this large population approximation instead of the finite-size population process.
  • Quantitative approximation of a Keller–Segel PDE by a branching moderately interacting particle system and suppression of blow-up
    • Cavallazzi Thomas
    • Richard Alexandre
    • Tomasevic Milica
    , 2026. The Keller–Segel PDE is a model for chemotaxis known to exhibit possible finite-time blow-up. Following a seminal work by Tello and Winkler [43], a logistic damping term is added to this PDE and local well-posedness of mild solutions is proven. When the space dimension is 2 or when the damping is strong enough, the solution is global in time. In the second part of this work, a microscopic description of this model is introduced in terms of a system of stochastic moderately interacting particles. This system features two main characteristics: the interaction between particles happens through a singular (Coulomb-type) kernel which is attractive; and the particles are subject to demographic events, birth and death due to local competition with other particles. The latter induces a branching structure of the particle system. The main result of this work is then the convergence of the empirical measure of the particle system towards the Keller–Segel PDE with logistic damping, with a rate of order N^{-1/(2(d+1))}.
  • Stochastic invariance in infinite dimension beyond Lipschitz coefficients
    • Abi Jaber Eduardo
    • Tappe Stefan
    , 2026. We establish necessary and sufficient conditions for stochastic invariance of closed subsets in Hilbert spaces for solutions to infinite-dimensional stochastic differential equations (SDEs) under mild assumptions on the coefficients. Our first characterization is formulated in terms of certain normal vectors to the invariance set and requires differentiability only of the dispersion operator, but not of the diffusion coefficient itself. The condition involves a suitable corrected drift expressed through the dispersion operator and its Moore-Penrose pseudoinverse, extending the classical Stratonovich correction term to the present low-regularity setting. Our second characterization is given in terms of the positive maximum principle for the infinitesimal generator of the associated diffusion process. We illustrate our characterizations in the case of invariant manifolds.
  • Certified Per-Instance Unlearning Using Individual Sensitivity Bounds
    • Benarroch Hanna
    • Atif Jamal
    • Cappé Olivier
    , 2026. Certified machine unlearning can be achieved via noise injection leading to differential privacy guarantees, where noise is calibrated to worst-case sensitivity. Such conservative calibration often results in performance degradation, limiting practical applicability. In this work, we investigate an alternative approach based on adaptive per-instance noise calibration tailored to the individual contribution of each data point to the learned solution. This raises the following challenge: how can one establish formal unlearning guarantees when the mechanism depends on the specific point to be removed? To define individual data point sensitivities in noisy gradient dynamics, we consider the use of per-instance differential privacy. For ridge regression trained via Langevin dynamics, we derive high-probability per-instance sensitivity bounds, yielding certified unlearning with substantially less noise injection. We corroborate our theoretical findings through experiments in linear settings and provide further empirical evidence on the relevance of the approach in deep learning settings.
  • Unbiased Approximate Vector-Jacobian Products for Efficient Backpropagation
    • Bakong Killian
    • Massoulié Laurent
    • Oyallon Edouard
    • Scaman Kevin
    , 2026. In this work we introduce methods to reduce the computational and memory costs of training deep neural networks. Our approach consists in replacing exact vector-Jacobian products by randomized, unbiased approximations thereof during backpropagation. We provide a theoretical analysis of the trade-off between the number of epochs needed to achieve a target precision and the cost reduction for each epoch. We then identify specific unbiased estimates of vector-Jacobian products for which we establish desirable optimality properties of minimal variance under sparsity constraints. Finally we provide in-depth experiments on multi-layer perceptron, BagNet and Vision Transformer architectures. These validate our theoretical results, and confirm the potential of our proposed unbiased randomized backpropagation approach for reducing the cost of deep learning.
  • Generalized Leverage Score for Scalable Assessment of Privacy Vulnerability
    • Dorseuil Valentin
    • Atif Jamal
    • Cappé Olivier
    , 2026. Can the privacy vulnerability of individual data points be assessed without retraining models or explicitly simulating attacks? We answer affirmatively by showing that exposure to membership inference attack (MIA) is fundamentally governed by a data point’s influence on the learned model. We formalize this in the linear setting by establishing a theoretical correspondence between individual MIA risk and the leverage score, identifying it as a principled metric for vulnerability. This characterization explains how data-dependent sensitivity translates into exposure, without the computational burden of training shadow models. Building on this, we propose a computationally efficient generalization of the leverage score for deep learning. Empirical evaluations confirm a strong correlation between the proposed score and MIA success, validating this metric as a practical surrogate for individual privacy risk assessment.
  • Riemannian Stochastic Interpolants for Amorphous Particle Systems
    • Grenioux Louis
    • Galliano Leonardo
    • Berthier Ludovic
    • Biroli Giulio
    • Gabrié Marylou
    , 2025. Modern generative models hold great promise for accelerating diverse tasks involving the simulation of physical systems, but they must be adapted to the specific constraints of each domain. Significant progress has been made for biomolecules and crystalline materials. Here, we address amorphous materials (glasses), which are disordered particle systems lacking atomic periodicity. Sampling equilibrium configurations of glass-forming materials is a notoriously slow and difficult task. This obstacle could be overcome by developing a generative framework capable of producing equilibrium configurations with well-defined likelihoods. In this work, we address this challenge by leveraging an equivariant Riemannian stochastic interpolation framework which combines Riemannian stochastic interpolants and equivariant flow matching. Our method rigorously incorporates periodic boundary conditions and the symmetries of multi-component particle systems, adapting an equivariant graph neural network to operate directly on the torus. Our numerical experiments on model amorphous systems demonstrate that enforcing geometric and symmetry constraints significantly improves generative performance.
    DOI : 10.48550/arXiv.2512.16607
  • A robust computational framework for the mixture-energy-consistent six-equation two-phase model with instantaneous mechanical relaxation terms
    • Orlando Giuseppe
    • Haegeman Ward
    • Pelanti Marica
    • Massot Marc
    , 2026. We present a robust computational framework for the numerical solution of a hyperbolic 6-equation single-velocity two-phase system. The system's main interest is that, when combined with instantaneous mechanical relaxation, it recovers the solution of the 5-equation model of Kapila. Several numerical methods based on this strategy have been developed over the years. However, neither the 5- nor 6-equation model admits a complete set of jump conditions because they involve non-conservative products. Different discretizations of these terms in the 6-equation model exist. The precise impact of these discretizations on the numerical solutions of the 5-equation model, in particular for shocks, is still an open question to which this work provides new insights. We consider the phasic total energies as prognostic variables to naturally enforce discrete conservation of total energy and compare the accuracy and robustness of different discretizations for the hyperbolic operator. Namely, we discuss the construction of an HLLC approximate Riemann solver in relation to jump conditions. We then compare an HLLC wave-propagation scheme which includes the non-conservative terms, with Rusanov and HLLC solvers for the conservative part in combination with suitable approaches for the non-conservative terms. We show that some approaches for the discretization of non-conservative terms fit within the framework of path-conservative schemes for hyperbolic problems. We then analyze the use of various numerical strategies on several relevant test cases, showing both the impact of the theoretical shortcomings of the models as well as the importance of the choice of a robust framework for the global numerical strategy.
  • Non-Exchangeable Mean Field Markov Decision Processes with common noise: from Bellman equation to quantitative propagation of chaos
    • Mekkaoui Samy
    • Pham Huyên
    , 2026. We study infinite-horizon Markov Decision Processes (MDPs) with a continuum of heterogeneous agents interacting through a common noise, without assuming exchangeability. We introduce the framework of Conditional Non-Exchangeable Mean Field MDPs (CNEMF-MDPs) in both a strong formulation and a label-state formulation. We establish the equivalence between these two formulations by showing that the control problem can be lifted to a standard MDP defined on the Wasserstein space P_λ(I × X), where I denotes the label (heterogeneity) space, X is the individual state space, and λ specifies the fixed distribution of agent labels. Within this framework, we characterize the value function as the unique fixed point of an appropriate Bellman operator acting on P_λ(I × X). Our second contribution is a quantitative analysis of the propagation of chaos for this non-exchangeable setting with common noise. We derive sharp finite-population bounds by comparing the Bellman operator of the finite N-agent MDP, defined on the high-dimensional space X^N, with its infinite-agent counterpart. This comparison yields explicit constructions of near-optimal policies for the N-agent system from ε-optimal policies of the limiting CNEMF-MDP.
  • Two-Temperature and Thermal Plasma Kinetic Theories
    • Giovangigli Vincent
    , 2026. We first review a two-temperature kinetic theory of multicomponent magnetized reactive plasmas where electrons and heavy species have their own temperature. The Knudsen number is taken to be proportional to the square root of the mass ratio and polyatomic species are taken into account. We then review the one-temperature kinetic theory of multicomponent magnetized reactive plasmas when the mass ratio remains of order unity. The complex tensorial structure of the transport fluxes is addressed as well as the symmetry properties of the multicomponent transport coefficients. We then establish new links between these two theories by using the two-temperature scaling in the transport linear system obtained from the one-temperature kinetic theory. The flux structure of the two-temperature theory is recovered from the equilibrium theory as well as the second order corrector terms. We also address the solution of transport linear systems by using fast and convergent iterative algorithms and their improvement for ionized mixtures.
  • Entropic Mirror Monte Carlo
    • Cherradi Anas
    • Janati Yazid
    • Durmus Alain
    • Le Corff Sylvain
    • Petetin Yohan
    • Stoehr Julien
    , 2026. Importance sampling is a Monte Carlo method which designs estimators of expectations under a target distribution using weighted samples from a proposal distribution. When the target distribution is complex, such as multimodal distributions in highdimensional spaces, the efficiency of importance sampling critically depends on the choice of the proposal distribution. In this paper, we propose a novel adaptive scheme for the construction of efficient proposal distributions. Our algorithm promotes efficient exploration of the target distribution by combining global sampling mechanisms with a delayed weighting procedure. The proposed weighting mechanism plays a key role by enabling rapid resampling in regions where the proposal distribution is poorly adapted to the target. Our sampling algorithm is shown to be geometrically convergent under mild assumptions and is illustrated through various numerical experiments.