Publications

Theses defended at CMAP are available at the following link:
Browse the CMAP theses

Listed below, by year, are the publications recorded in the HAL open archive.

2025

  • Multi-source domain adaptation for learning on biosignals
    • Gnassounou Theo
, 2025. The success of modern machine learning models often relies on large-scale labeled datasets. However, in specialized fields like neuroscience, data is frequently scarce due to privacy concerns, leading to low variability in the training data. This results in a distribution shift where models trained on a source dataset perform poorly on a different target dataset. Domain Adaptation (DA) aims to solve this by adapting models to new target domains with different data distributions without access to target labels. Among the various DA techniques, Optimal Transport (OT) has emerged as a powerful and principled tool for aligning distributions between domains. This thesis specifically addresses the challenges of applying DA to Electroencephalography (EEG) data. EEG signals exhibit high variability across subjects and sessions, a significant source of distribution shift, which is further compounded by limited data availability. Despite its potential, the broader application of DA is hampered by a lack of accessible software and reproducible benchmarks with realistic validation protocols. Moreover, existing DA methods are often tailored for computer vision, making them less effective for other domains like EEG analysis. This thesis addresses these challenges by improving the accessibility and reproducibility of DA methods. We introduce novel techniques for aligning EEG signals and propose a robust normalization layer for deep learning models. This thesis is organized as follows. Chapter 1 provides an overview of domain adaptation, reviewing distribution shifts and the methods designed to mitigate them, including modern deep learning approaches. It then details Optimal Transport (OT) theory and its application to aligning distributions, introducing the specific case where the data are assumed to be Gaussian. Finally, it connects these concepts to the analysis of brain signals (EEG), where high variability presents a significant distribution shift challenge. Chapter 2 introduces Skada, a Python toolbox that simplifies the implementation and evaluation of domain adaptation methods. It also presents SKADA-bench, a comprehensive benchmark using a rigorous cross-validation protocol to assess DA methods across various data modalities. The results highlight the challenges of hyperparameter tuning and the effectiveness of parameter-free methods. Chapter 3 proposes Spatio-Temporal Monge Alignment (STMA), a novel method for multi-source and test-time adaptation in EEG data. STMA aligns the Power Spectral Density (PSD) of signals to a common barycenter, effectively normalizing the data. Extensive experiments on BCI and sleep staging datasets demonstrate that STMA significantly improves model performance and generalization. Chapter 4 presents PSDNorm, a test-time temporal normalization method for deep learning in sleep staging. PSDNorm is integrated as a normalization layer within a neural network to align the PSD during feature extraction. Validated on 10 diverse datasets comprising more than 10,000 subjects, PSDNorm is shown to significantly outperform traditional normalization techniques. Finally, Chapter 5 summarizes the contributions of this thesis and outlines future research directions, including the evolution of the Skada library, expanding domain adaptation benchmarks, and generalizing the Monge Alignment framework to other data modalities.
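To make the Gaussian special case of OT mentioned in this abstract concrete: between centered Gaussians, the Monge map is the linear map A = Σ_s^{-1/2} (Σ_s^{1/2} Σ_t Σ_s^{1/2})^{1/2} Σ_s^{-1/2}. A minimal numpy sketch of this standard formula (illustrative only; not the thesis code or the Skada API):

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_monge_map(cov_s, cov_t):
    """Linear Monge map A such that x -> A @ x pushes N(0, cov_s) onto N(0, cov_t)."""
    s_half = np.real(sqrtm(cov_s))           # Sigma_s^{1/2}
    s_half_inv = np.linalg.inv(s_half)       # Sigma_s^{-1/2}
    middle = np.real(sqrtm(s_half @ cov_t @ s_half))
    return s_half_inv @ middle @ s_half_inv

# Toy check: transported samples should have covariance close to cov_t.
rng = np.random.default_rng(0)
cov_s = np.array([[2.0, 0.5], [0.5, 1.0]])
cov_t = np.array([[1.0, -0.3], [-0.3, 0.5]])
X = rng.multivariate_normal([0, 0], cov_s, size=100_000)
A = gaussian_monge_map(cov_s, cov_t)
print(np.cov((X @ A.T).T))                   # approximately cov_t
```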
  • Computer-assisted methods for diffusion models in population dynamics : cross-diffusion, non-local reactions, and stability
    • Payan Maxime
, 2025. The analysis of nonlinear cross-diffusion systems is a major challenge for understanding complex physical or biological phenomena. Computer-assisted methods provide powerful tools to deliver precise and rigorous answers to theoretical questions such as the existence and stability of steady states, or the search for eigenvalues in concrete situations. This thesis proposes the application and adaptation of these methods in the context of cross-diffusion and nonlocal or nonhomogeneous reaction-diffusion models. A first study concerns a chemotaxis model incorporating a nonlinearity in the diffusion term. We develop a specific tool to handle this nonlinearity, allowing us to establish the existence of multiple steady states using a fixed point theorem whose assumptions are verified by computer. A second study focuses on a nonlocal reaction-diffusion model, where we study the stability of a steady state as a function of the intensity of nonlocality. This study is a first step towards the analysis of stability for cross-diffusion systems. We propose a computer-assisted approach for the study of the spectrum of a compact resolvent operator, using a Gershgorin theorem for infinite matrices. We then analyze the dependence of the first eigenvalue with respect to the nonlocality parameter in order to determine the existence and uniqueness of a stability threshold. Finally, a last study on nonlinear models, which include cross-diffusion models, allows us to propose a general methodology to study the existence and stability of steady states. This methodology is inspired by an analogy with the Lyapunov stability theorem for systems of ordinary differential equations. We build a positive definite self-adjoint operator, whose properties are verified by a computer, thus obtaining a quadratic Lyapunov functional.
  • Fully explicit numerical scheme for linearized wave propagation in nearly-incompressible soft hyperelastic solids
    • Merlini Giulia
    • Allain Jean-Marc
    • Imperiale Sébastien
Wave Motion, Elsevier, 2025, 139, pp.103594. The numerical approximation of wave propagation problems in nearly or purely incompressible solids faces several challenges, such as locking and stability constraints. In this work we propose a stabilized Leapfrog scheme based on the use of Chebyshev polynomials to relax the stability condition, which is strongly limited by the enforcement of incompressibility. The scheme is fully explicit, second-order accurate and energy-preserving. For the space discretization we use a mixed formulation with high-order spectral elements and mass-lumping. A strategy is proposed for an efficient and accurate computation of the pressure contribution with a new definition of the discrete Grad-div operator. Finally, we consider linear wave propagation problems in nearly-incompressible hyperelastic solids subject to static preload. (10.1016/j.wavemoti.2025.103594)
    DOI : 10.1016/j.wavemoti.2025.103594
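For orientation, the baseline explicit leapfrog update that such stabilizations build on, in its simplest 1D scalar form (an illustrative sketch only; the paper's mixed formulation and Chebyshev stabilization are not reproduced):

```python
import numpy as np

# Plain leapfrog for u_tt = c^2 u_xx on [0, 1], homogeneous Dirichlet BCs.
# The time step obeys the standard CFL restriction dt <= dx / c, which is the
# constraint that a Chebyshev-stabilized variant is designed to relax.
c, nx = 1.0, 200
dx = 1.0 / nx
dt = 0.9 * dx / c
x = np.linspace(0.0, 1.0, nx + 1)
u_prev = np.exp(-200 * (x - 0.5) ** 2)   # initial displacement
u = u_prev.copy()                        # first-order start with zero velocity
lam2 = (c * dt / dx) ** 2
for _ in range(500):
    u_next = np.zeros_like(u)
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_prev, u = u, u_next
print(float(np.abs(u).max()))            # solution stays bounded under the CFL
```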
  • Hommage à Peter Lax (1926-2025)
    • Allaire Grégoire
    • Berestycki Henri
    • Golse François
    • Rauch Jeffrey
Matapli, Société de Mathématiques Appliquées et Industrielles (SMAI), 2025, 138, pp.101-124. Peter David Lax died on 16 May 2025 at the age of 99. A historic figure of mathematics and scientific computing has left us, after an extremely rich and fruitful life during which he laid the mathematical foundations of compressible fluid mechanics, the numerical computation of shock waves, scattering theory, integrable systems, solitons, and the numerical analysis of partial differential equations. His influence has been, and remains, immense, both in mathematics and in other disciplines such as, for example, computational fluid dynamics. After a first section giving some biographical landmarks and drawing his portrait, the three following sections attempt to show, through a few examples, the depth of his ideas and their lasting impact, which still shapes applied mathematics today.
  • Finite elements approximation of the boundary value problems of geodesics
    • Le Ruz Gaël
    • Lombardi Damiano
, 2025. In this contribution we investigate the numerical approximation of the boundary value problems of geodesics, i.e. the log operation, on Riemannian manifolds. In particular, by leveraging the variational formulation of the geodesics problem, we propose a finite element discretisation in time. We also investigate the numerical approximation of the Hessian of the squared distance with a sensitivity analysis on the first problem.
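A crude discrete analogue of this variational boundary value problem: minimize the discrete energy of a curve with fixed endpoints, here by projected gradient descent on the sphere (an illustrative stand-in under simplifying assumptions, not the paper's finite element method):

```python
import numpy as np

# Discrete geodesic BVP on S^2: minimize sum |x_{i+1} - x_i|^2 over interior
# nodes of a piecewise-linear curve, projecting iterates back onto the sphere.
def geodesic(p, q, n=20, iters=2000, lr=0.1):
    t = np.linspace(0.0, 1.0, n)[:, None]
    X = (1 - t) * p + t * q                          # straight-line initial guess
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    for _ in range(iters):
        grad = 2 * X[1:-1] - X[:-2] - X[2:]          # gradient of the discrete energy
        X[1:-1] -= lr * grad
        X[1:-1] /= np.linalg.norm(X[1:-1], axis=1, keepdims=True)  # project to S^2
    return X

p, q = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
X = geodesic(p, q)
print(X[len(X) // 2])   # midpoint of the great circle, ~ (1/sqrt(2), 1/sqrt(2), 0)
```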
  • Error-Based mesh selection for efficient numerical simulations with variable parameters
    • Dornier Hugo
    • Le Maître Olivier P
    • Congedo Pietro Marco
    • Salah El Din Itham
    • Marty Julien
    • Bourasseau Sébastien
    , 2025. Advanced numerical simulations often depend on mesh refinement techniques to manage discretization errors in complex models and reduce computational costs. This work concentrates on Adaptive Mesh Refinement (AMR) for steady-state solutions, which uses error estimators to iteratively refine the mesh locally and gradually tailor it to the solution. AMR requires evaluating the solution across a series of meshes. When solving the model for multiple operating conditions, such as in uncertainty quantification studies, full systematic adaptation can cause significant computational overhead. To mitigate this, the Error-based Mesh Selection (EMS) method is introduced to decrease the cost of adaptation. For each operating condition, EMS seeks to choose, from a library of pre-adapted meshes, the one that minimizes the discretization error. A key feature of this approach is the use of Gaussian Process models to predict the solution errors for each mesh in the library. These error models are built solely from the library's meshes and their solutions, using restriction errors as proxies for discretization errors, thereby avoiding additional model evaluations. The EMS method is tested on an analytical shock problem and a supersonic scramjet configuration, showing near-optimal mesh selection. The influence of library size on the resulting error level is also examined.
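A minimal sketch of the selection step described here, with one scikit-learn Gaussian process error model per library mesh (all data, mesh names, and the toy error proxies are hypothetical):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical setup: operating conditions theta (1D here) and, for each
# pre-adapted mesh, error proxies built from restriction errors.
rng = np.random.default_rng(1)
theta_train = rng.uniform(0, 1, size=(20, 1))
err = {  # toy proxy errors for a two-mesh library
    "mesh_A": np.abs(theta_train[:, 0] - 0.2) + 0.05 * rng.standard_normal(20),
    "mesh_B": np.abs(theta_train[:, 0] - 0.8) + 0.05 * rng.standard_normal(20),
}
models = {m: GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-3).fit(theta_train, e)
          for m, e in err.items()}

def select_mesh(theta_new):
    """Pick the library mesh whose predicted error is smallest at theta_new."""
    preds = {m: float(gp.predict(np.atleast_2d(theta_new))[0]) for m, gp in models.items()}
    return min(preds, key=preds.get)

print(select_mesh([0.3]))   # expected: "mesh_A"
```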
  • On the stability of the invariant probability measures of McKean-Vlasov equations
    • Cormier Quentin
    Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 2025, 61 (4). We study the long-time behavior of some McKean-Vlasov stochastic differential equations used to model the evolution of large populations of interacting agents. We give conditions ensuring the local stability of an invariant probability measure. Lions derivatives are used in a novel way to obtain our stability criteria. We obtain results for non-local McKean-Vlasov equations on $\mathbb{R}^d$ and for McKean-Vlasov equations on the torus where the interaction kernel is given by a convolution. On $\mathbb{R}^d$, we prove that the location of the roots of an analytic function determines the stability. On the torus, our stability criterion involves the Fourier coefficients of the interaction kernel. In both cases, we prove the convergence in the Wasserstein metric $W_1$ with an exponential rate of convergence. (10.1214/24-AIHP1504)
    DOI : 10.1214/24-AIHP1504
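For reference, the generic form of the equations studied, in standard notation (not necessarily the paper's):

```latex
% McKean-Vlasov SDE on R^d: the law of the solution enters its own drift.
\begin{aligned}
  dX_t &= b\big(X_t, \mu_t\big)\,dt + \sigma\, dB_t,
  \qquad \mu_t = \mathrm{Law}(X_t),\\
  \text{torus case:}\quad b(x,\mu) &= -\nabla V(x) - (\nabla W * \mu)(x),
\end{aligned}
```

where, in the torus case, the interaction enters through the convolution kernel W, whose Fourier coefficients drive the stability criterion described in the abstract.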
  • Scaling limits for a population model with growth, division and cross-diffusion
    • Doumic Marie
    • Hecht Sophie
    • Hoffmann Marc
    • Peurichard Diane
    Mathematical Models and Methods in Applied Sciences, World Scientific Publishing, 2025, 35 (12), pp.2611-2660. Originally motivated by the morphogenesis of bacterial microcolonies, the aim of this article is to explore models through different scales for a spatial population of interacting, growing and dividing particles. We start from a microscopic stochastic model, write the corresponding stochastic differential equation satisfied by the empirical measure, and rigorously derive its mesoscopic (mean-field) limit. Under smoothness and symmetry assumptions for the interaction kernel, we then obtain entropy estimates, which provide us with a localization limit at the macroscopic level. Finally, we perform a thorough numerical study in order to compare the three modeling scales. (10.1142/S0218202525500472)
    DOI : 10.1142/S0218202525500472
  • Wave turbulence, thermalization and multimode locking in optical fibers
    • Ferraro Mario
    • Baudin Killian
    • Gervaziev Mikhail D
    • Fusaro Adrien
    • Picozzi A.
    • Garnier Josselin
    • Millot G.
    • Kharenko Denis S.
    • Podivilov Evgeniy V.
    • Babin Sergey A.
    • Mangini Fabio
    • Wabnitz Stefan W
Physica D: Nonlinear Phenomena, Elsevier, 2025, 481, pp.134758. We present a comprehensive overview of recent advances in theory and experiments on complex light propagation phenomena in nonlinear multimode fibers. On the basis of the wave turbulence theory, we derive kinetic equations describing the out-of-equilibrium process of optical thermalization toward the Rayleigh–Jeans (RJ) equilibrium distribution. Our theory is applied to explain the effect of beam self-cleaning (BSC) in graded-index (GRIN) fibers, whereby a speckled beam transforms into a bell-shaped beam at the fiber output as the input peak power grows larger. Although the output beam is typically dominated by the fundamental mode of the fiber, higher-order modes (HOMs) cannot be fully depleted, as described by the turbulence cascades associated with the conserved quantities. We theoretically explore the role of random refractive index fluctuations along the fiber core, and show how these imperfections may turn out to assist the observation of BSC in a practical experimental setting. This conclusion is supported by the derivation of wave turbulence kinetic equations that account for the presence of a time-dependent disorder (random mode coupling). The kinetic theory reveals that weak disorder accelerates the rate of RJ thermalization and beam-cleaning condensation. On the other hand, although strong disorder is expected to suppress wave condensation, the kinetic equation reveals that an out-of-equilibrium process of condensation and RJ thermalization can occur in a regime where disorder predominates over nonlinearity. In general, the kinetic equations are validated by numerical simulations of the generalized nonlinear Schrödinger equation. We outline a series of recent experiments that confirm the statistical mechanics approach for describing beam propagation and thermalization. For example, we highlight the demonstration of entropy growth, and point out that there are inherent limits to peak-power scaling in multimode fiber lasers. We conclude by pointing out the experimental observation that BSC is accompanied by an effect of modal phase-locking. On the one hand, this explains the observed preservation of the spatial coherence of the beam; on the other, it points to the need to extend current descriptions in future research. (10.1016/j.physd.2025.134758)
    DOI : 10.1016/j.physd.2025.134758
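To illustrate the RJ equilibrium invoked above: the modal occupations follow n_p = T/(ε_p − μ). A toy computation for the equally spaced, degenerate mode groups of an ideal 2D parabolic (GRIN-like) profile, with illustrative values of T and μ:

```python
import numpy as np

# Rayleigh-Jeans equilibrium over mode groups of a 2D parabolic index profile:
# group k has eigenvalue eps_k (equally spaced) and degeneracy g_k = k + 1.
eps = np.arange(1, 21, dtype=float)   # mode-group eigenvalues (arbitrary units)
g = np.arange(1, 21, dtype=float)     # group degeneracies
T, mu = 5.0, 0.0                      # chosen so all occupations are positive
n = T / (eps - mu)                    # RJ modal occupation per mode
power = g * n
print(power / power.sum())            # power fraction per group: low-order groups dominate
```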
  • Sensitivity analysis of a flow redistribution model for a multidimensional and multifidelity simulation of fuel assembly bow in a pressurized water reactor
    • Abboud Ali
    • de Lambert Stanislas
    • Garnier Josselin
    • Leturcq Bertrand
    • Lamorte Nicolas
    Nuclear Engineering and Design, Elsevier, 2025, 443, pp.114259. In the core of nuclear reactors, fluid–structure interaction and intense irradiation lead to progressive deformation of fuel assemblies. When this deformation is significant, it can lead to additional costs and longer fuel unloading and reloading operations. Therefore, it is preferable to adopt a fuel management that avoids excessive deformation and interactions between fuel assemblies. However, the prediction of deformation and interactions between fuel assemblies is uncertain. Uncertainties affect neutronics, thermohydraulics and thermomechanics parameters. Indeed, the initial uncertainties are propagated over several successive power cycles of twelve months each through the coupling of non-linear, nested and multidimensional thermal–hydraulic and thermomechanical simulations. In this article, we set out to study the hydraulic contribution and quantify the associated uncertainty. To achieve this objective, we develop a multi-stage approach to carry out an initial sensitivity analysis, highlighting the most influential parameters in the hydraulic model. By optimally adjusting these parameters, we aim to obtain a more accurate description of the flow redistribution phenomenon in the reactor core. The aim of the sensitivity analysis presented in this article is to construct an accurate and suitable surrogate model that represents the in-core lateral hydraulic forces in a given state. This surrogate model could then be coupled with a thermomechanical model to quantify the final uncertainty in the simulation of fuel assembly bow within a pressurized water reactor. This approach will provide a better understanding of the interactions between hydraulic and thermomechanical phenomena, thereby improving the reliability and accuracy of the simulation results. (10.1016/j.nucengdes.2025.114259)
    DOI : 10.1016/j.nucengdes.2025.114259
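As a generic illustration of the sensitivity analysis step, a pure-numpy pick-freeze estimator of first-order Sobol' indices, S_i ≈ Cov(Y_A, Y_C_i)/Var(Y_A), where C_i shares column i with A (the model f is a toy stand-in, not the article's hydraulic code):

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):  # toy model standing in for the lateral hydraulic force response
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

n, d = 100_000, 3
A, B = rng.uniform(0, 1, (n, d)), rng.uniform(0, 1, (n, d))
yA = f(A)
for i in range(d):
    C = B.copy()
    C[:, i] = A[:, i]                  # "freeze" coordinate i across both samples
    yC = f(C)
    S_i = np.cov(yA, yC)[0, 1] / np.var(yA)
    print(f"S_{i} ~ {S_i:.3f}")        # first-order Sobol' index of input i
```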
  • Interactions and opportunities at the crossroads of deep probabilistic modeling and statistical inference through Markov Chains Monte Carlo
    • Grenioux Louis
    , 2025. This thesis advances the field of sampling, a cornerstone of Bayesian inference, computational physics, and probabilistic modeling, where the goal is to generate samples from a known probability density. A related challenge arises in generative modeling, which seeks to produce new data resembling a given dataset, a problem that has seen major breakthroughs through recent advances in deep learning. The central aim of this work is to leverage modern generative models to enhance classical sampling frameworks. The study begins by examining the inherent difficulties of multi-modal sampling, identifying key limitations of both classical and advanced Monte Carlo methods. It then explores the integration of pre-trained normalizing flows into traditional Monte Carlo schemes, providing practical guidance on their performance across diverse target distributions. Building on this, diffusion models are incorporated into advanced annealed Monte Carlo methods, revealing both their potential and their limitations. The work also investigates how diffusion models can be embedded within a variational inference framework. In parallel, it proposes a learning-free diffusion-based sampler that replaces neural approximators with Monte Carlo estimators. Finally, these enhanced sampling strategies are applied to the training of energy-based models, introducing a novel algorithm in which a normalizing flow serves as an auxiliary sampler to facilitate the training of these expressive yet challenging generative models.
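A minimal illustration of one ingredient discussed in this abstract: using a pre-trained generative model as the proposal inside an independence Metropolis-Hastings step, so the chain corrects any mismatch between proposal and target. Here the "flow" is faked by a fixed Gaussian and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
log_target = lambda x: -0.5 * ((x - 2.0) ** 2)        # unnormalized target, N(2, 1)
proposal_sample = lambda: rng.normal(0.0, 2.0)         # stand-in for a flow sample
proposal_logpdf = lambda x: -0.5 * (x / 2.0) ** 2 - np.log(2.0 * np.sqrt(2 * np.pi))

x, chain = 0.0, []
for _ in range(50_000):
    y = proposal_sample()
    # Independence MH acceptance: ratio of importance weights target/proposal.
    log_alpha = (log_target(y) - proposal_logpdf(y)) - (log_target(x) - proposal_logpdf(x))
    if np.log(rng.uniform()) < log_alpha:
        x = y
    chain.append(x)
print(np.mean(chain[10_000:]))   # ~ 2.0, the target mean
```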
  • High Performance Krylov Subspace Solvers with Preconditioning and Deflation
    • Spillane Nicole
, 2025. Linear solvers are an essential tool for the accurate numerical simulation of large-scale problems. My work addresses the analysis, development and application of linear solvers for problems that arise in science and industry. My particular focus is on a class of iterative linear solvers called the Krylov subspace methods. Together with the accelerators that are preconditioning and deflation, Krylov subspace methods offer a flexible framework for a wide range of problems. The contributions in this manuscript fall into three main topics. A first set of results is for symmetric positive definite matrices solved by the conjugate gradient method preconditioned by domain decomposition. A unified theory of spectral coarse spaces is developed, a new fully algebraic solver is introduced, and open-source software is made available to apply these methods. The second set of results addresses multipreconditioning, a technique that allows users to exploit not just one, but several preconditioners. At each iteration, the space in which the solution is optimized grows much faster than with classical preconditioning. This can significantly accelerate convergence, but multipreconditioned iterations are also more expensive. A significant contribution is adaptive multipreconditioning, in which it is decided on the fly whether to apply a preconditioner or a multipreconditioner. This work is most advanced for symmetric positive definite problems, but non-symmetric preconditioners and non-symmetric problems are also considered. Finally, the last set of results is for non-Hermitian problems solved by GMRES. Convergence of GMRES is still not fully predictable for general matrices, despite many years of active research. The contributions in this manuscript are new convergence bounds that make apparent the role played by the preconditioner, the deflation operator and the choice of a weighted inner product. An important finding is that, if the matrix is positive definite, it is a good strategy to precondition its Hermitian part with a Hermitian positive definite preconditioner. Convergence then depends on how well the Hermitian part is preconditioned and on how non-Hermitian the problem is. Finally, a new spectral deflation space is proposed to improve the term in the bound that depends on the non-Hermitianness of the problem. This has the effect of accelerating convergence in practice too.
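A minimal numpy sketch of the preconditioned conjugate gradient iteration underlying the first set of results (textbook PCG with a Jacobi preconditioner, not the manuscript's domain decomposition solvers):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient for SPD A; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# Toy SPD tridiagonal system with a Jacobi (diagonal) preconditioner.
n = 200
A = (np.diag(np.linspace(1, 100, n))
     + np.diag(-0.3 * np.ones(n - 1), 1) + np.diag(-0.3 * np.ones(n - 1), -1))
b = np.ones(n)
x, its = pcg(A, b, lambda r: r / np.diag(A))
print(its, np.linalg.norm(A @ x - b))
```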
  • Optimal business model adaptation plan for a company under a transition scenario
    • Ndiaye Elisa
    • Bezat Antoine
    • Gobet Emmanuel
    • Guivarch Céline
    • Jiao Ying
    , 2024. Climate stress-tests aim at projecting the financial impacts of climate change, covering both transition and physical risks under given macro scenarios. However, in practice, transition risk has been the main focus of supervisory and academic exercises, and existing tools to downscale these macroeconomic projections to the firm level remain limited. We develop a methodology to downscale sector-level trajectories into firm-level projections for credit risk stress-tests. The approach combines probabilistic modeling with stochastic control to capture firm-level uncertainty and optimal decision-making. It can be applied to any transition scenario or sector and highlights how firm-level characteristics such as initial intensity, abatement cost, and exposure to uncertainty shape heterogeneous firm-level responses to the transition. The model explicitly incorporates firm-level business uncertainty through stochastic dynamics on relative emissions and sales, which affect both optimal decisions and resulting financial projections. Firms’ rational behavior is modeled as a stochastic minimization problem, solved numerically through a method we call Backward Sampling. Illustrating our method with the NGFS transition scenarios and three types of companies (Green, Brown and Average), we show that firm-specific intensity reduction strategies yield significantly different financial outcomes compared to assuming uniform sectoral decarbonisation rates. Moreover, investing an amount equivalent to the total carbon tax paid at a given date is limited by its lack of a forward-looking feature, making it insufficient to buffer against future carbon shocks in a disorderly transition. This highlights the importance of firm-level granularity in climate risk assessments. By explicitly modeling firm heterogeneity and optimal decision-making under uncertainty, our methodology complements existing approaches to granular transition risk assessment and contributes to the ongoing development of scenario-based credit risk projections at the firm level.
  • Uncertainty quantification applied to fuel assembly bow in a pressurized water reactor
    • Abboud Ali
, 2025. In the core of nuclear reactors, fluid-structure interactions combined with intense irradiation lead to progressive deformations of fuel assemblies over successive operating cycles. When these deformations become significant, they can disrupt the positioning of the assemblies, damage spacer grids, or block and delay control rod insertion, compromising reactor safety and increasing operational costs. Understanding and controlling these deformations is therefore essential. However, their simulation remains uncertain due to the complexity of the involved multiphysics couplings: thermo-hydraulic and thermo-mechanical models are highly nonlinear, high-dimensional, and interact over several irradiation cycles, making uncertainty propagation difficult to assess and control. This thesis aims to develop a robust methodology for uncertainty quantification (UQ). The methodology must account for the coupled, nonlinear, and computationally expensive nature of the simulations while providing analysis tools to improve predictive reliability. To this end, we aim to address the following questions: (Q1) What is the magnitude of uncertainties in the results of coupled simulations? (Q2) What levers are available to reduce these uncertainties? (Q3) How does a design change affect simulation outcomes? The proposed approach begins with a separate analysis of the thermo-hydraulic and mechanical codes to identify their main sources of uncertainty using sensitivity analysis techniques. Surrogate models are then developed for each domain to reduce computational cost while maintaining the accuracy required for UQ. These surrogate models are finally coupled through an efficient strategy to simulate a full irradiation cycle and propagate uncertainties in a controlled manner. The methodology is structured around four main objectives: (O1) Identify and characterize sources of uncertainty throughout the simulation chain. (O2) Develop surrogate models adapted to uncertainty propagation and sensitivity analyses. (O3) Identify influential parameters and nonlinear interactions through global sensitivity analysis. (O4) Quantify and, where possible, reduce uncertainties to guide modeling or experimental efforts. In summary, this thesis proposes an uncertainty quantification approach tailored to complex multiphysics simulations, with the ultimate goal of improving the safety, reliability, and cost-efficiency of nuclear reactor operation.
  • Stability analysis of a new curl-based full field reconstruction method in 2D isotropic nearly-incompressible elasticity
    • Chibli Nagham
    • Genet Martin
    • Imperiale Sébastien
    , 2025. In time-harmonic elastography, the shear modulus is typically inferred from full field displacement data by solving an inverse problem based on the time-harmonic elastodynamic equation. In this paper, we focus on nearly incompressible media, which pose robustness challenges, especially in the presence of noisy data. Restricting ourselves to 2D and considering an isotropic, linearly deforming medium, we reformulate the problem as a non-autonomous hyperbolic system and, through theoretical analysis, establish existence, uniqueness, and stability of the inverse problem. To ensure robustness with noisy data, we propose a least-squares approach with regularization. The convergence properties of the method are verified numerically using in silico data.
  • Semi-discrete convergence analysis of a numerical method for waves in nearly-incompressible media with spectral finite elements
    • Ramiche Zineb
    • Imperiale Sébastien
, 2025. In this work, we present a convergence analysis of a fully explicit high-order space discretisation approach for the computation of elastic field propagation in nearly incompressible media. Our approach relies on the use of high-order continuous spectral finite elements with mass-lumping. We present an approach that is valid for purely hexahedral and quadrilateral meshes, where the elastic field is sought in the space of Q_k continuous finite elements and the pressure in Q_{k-2} discontinuous finite elements. Furthermore, we prove the stability of the finite element discretization. This allows us to carry out error estimates for the semi-discrete problem in space, accounting in particular for quadrature errors.
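The mass-lumping mentioned here rests on Gauss-Lobatto-Legendre (GLL) quadrature: collocating the Q_k nodal basis at the quadrature points makes the element mass matrix diagonal. A small numpy sketch of the standard GLL nodes and weights on [-1, 1] (textbook formulas, not the paper's code):

```python
import numpy as np
from numpy.polynomial import legendre as L

def gll(N):
    """GLL nodes (endpoints + roots of P_N') and weights w_j = 2 / (N(N+1) P_N(x_j)^2)."""
    PN = np.zeros(N + 1); PN[-1] = 1.0            # coefficients of Legendre P_N
    interior = L.legroots(L.legder(PN))           # roots of P_N'
    nodes = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    weights = 2.0 / (N * (N + 1) * L.legval(nodes, PN) ** 2)
    return nodes, weights

nodes, weights = gll(4)
print(nodes)
print(weights.sum())   # = 2.0: the lumped (diagonal) mass is exact for constants
```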
  • Sensitivity Analysis of Emissions Markets: A Discrete-Time Radner Equilibrium Approach
    • Crépey Stéphane
    • Tadese Mekonnen
    • Vermandel Gauthier
, 2025. Emissions markets play a crucial role in reducing pollution by encouraging firms to minimize costs. However, their structure heavily relies on the decisions of policymakers, on future economic activity, and on the availability of abatement technologies. This study examines how changes in regulatory standards, firms' abatement costs, and emissions levels affect allowance prices and firms' efforts to reduce emissions. This is done in a Radner equilibrium framework encompassing inter-temporal decision-making, uncertainty, and a comprehensive assessment of the market dynamics and outcomes. The results of this research have the potential to assist policymakers in enhancing the structure and efficiency of emissions trading systems, through an in-depth comprehension of the reactions of market stakeholders towards different market situations.
  • Curvature penalization of strongly anisotropic interfaces models and their phase-field approximation
    • Babadjian Jean-François
    • Buet Blanche
    • Goldman Michael
, 2025. This paper studies the effect of anisotropy on sharp or diffuse interface models. When the surface tension is a convex function of the normal to the interface, the anisotropy is said to be weak. This usually ensures the lower semicontinuity of the associated energy. If, however, the surface tension depends on the normal in a nonconvex way, this so-called strong anisotropy may lead to instabilities related to the lack of lower semicontinuity of the functional. We investigate the regularizing effects of adding a higher order term of Willmore type to the energy. We consider two types of problems. The first one is an anisotropic nonconvex generalization of the perimeter, and the second one is an anisotropic nonconvex Mumford-Shah functional. In both cases, lower semicontinuity properties of the energies with respect to a natural mode of convergence are established, as well as Γ-convergence type results by means of a phase field approximation. In comparison with related results for curvature dependent energies, one of the original aspects of our work is that, in the context of free discontinuity problems, we are able to consider singular structures such as crack-tips or multiple junctions.
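For orientation, the standard diffuse-interface functionals of the type involved, in generic form (not the paper's exact anisotropic energies): the Modica-Mortola perimeter approximation and a De Giorgi-type Willmore penalization with a double-well potential W:

```latex
\begin{aligned}
  P_\varepsilon(u) &= \int_\Omega \Big( \frac{\varepsilon}{2}\,|\nabla u|^2
      + \frac{1}{\varepsilon}\, W(u) \Big)\, dx, \\
  \mathcal{W}_\varepsilon(u) &= \frac{1}{2\varepsilon} \int_\Omega
      \Big( \varepsilon\,\Delta u - \frac{1}{\varepsilon}\, W'(u) \Big)^{2} dx .
\end{aligned}
```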
  • Self-interacting approximation to McKean-Vlasov long-time limit: a Markov chain Monte Carlo method
    • Du Kai
    • Ren Zhenjie
    • Suciu Florin
    • Wang Songbo
    Journal de Mathématiques Pures et Appliquées, Elsevier, 2025, 205, pp.103782. For a certain class of McKean-Vlasov processes, we introduce proxy processes that substitute the mean-field interaction with self-interaction, employing a weighted occupation measure. Our study encompasses two key achievements. First, we demonstrate the ergodicity of the self-interacting dynamics, under broad conditions, by applying the reflection coupling method. Second, in scenarios where the drifts are negative intrinsic gradients of convex mean-field potential functionals, we use entropy and functional inequalities to demonstrate that the stationary measures of the self-interacting processes approximate the invariant measures of the corresponding McKean-Vlasov processes. As an application, we show how to learn the optimal weights of a two-layer neural network by training a single neuron. (10.1016/j.matpur.2025.103782)
    DOI : 10.1016/j.matpur.2025.103782
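A toy Euler discretization of the self-interaction idea: the mean-field term is replaced by an average over the trajectory's own past, a uniform occupation measure here, whereas the paper studies weighted variants. All functions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
grad_V = lambda x: x               # confinement potential V(x) = x^2 / 2
grad_W = lambda r: np.tanh(r)      # toy interaction kernel gradient
dt, n_steps = 1e-2, 5_000
x, past = 0.0, [0.0]
for _ in range(n_steps):
    # Self-interaction: drift averages grad_W over the occupation measure of the past.
    interaction = np.mean(grad_W(x - np.array(past)))
    x += -(grad_V(x) + interaction) * dt + np.sqrt(2 * dt) * rng.standard_normal()
    past.append(x)
print(np.mean(past[n_steps // 2:]))   # long-run average under the stationary law (~0 here)
```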
  • Stochastic and surrogate assisted multiobjective optimization with CMA-ES
    • Gharafi Mohamed
, 2025. This thesis tackles expensive multiobjective optimization by extending the COMO-CMA-ES algorithm with surrogate models to reduce costly function evaluations. Built on the SOFOMORE framework, which decomposes a multiobjective problem into single-objective subproblems, each optimized in COMO-CMA-ES by a CMA-ES instance, the approach leverages surrogate assistance to accelerate convergence. Rather than modeling each objective separately, a linear-quadratic surrogate inspired by the lq-CMA-ES algorithm is trained directly on the fitness function based on the Uncrowded Hypervolume Improvement (UHVI) indicator. This simplifies model management and maintains efficiency. Experimental results show up to a threefold reduction in the number of function evaluations for problems with quadratic structure, while keeping similar iteration counts. A surrogate-based parallelization strategy further reduces the number of required iterations, improving performance in computationally expensive contexts. In this work we also propose a variant of CMA-ES assisted by surrogate models that preserves invariance to monotone transformations of the objective function. This property enhances the algorithm's robustness, making it more reliable across a wide range of problem formulations. Finally, the thesis investigates the scalability of the SOFOMORE framework and observes a quadratic decay in convergence rate as the desired Pareto front resolution increases. This phenomenon, explained through an idealized simulation model, is intrinsic to UHVI and more generally to hypervolume-based formulations. Overall, the work highlights both the potential and the structural limits of surrogate-assisted multiobjective optimization for expensive problems.
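For context, a minimal ask/tell loop with the pycma package, the single-objective CMA-ES building block that COMO-CMA-ES instantiates per subproblem (the thesis's surrogate-assisted multiobjective machinery is not reproduced here):

```python
import cma   # pycma package
import numpy as np

def sphere(x):
    """Toy objective standing in for one UHVI-based subproblem fitness."""
    return float(np.sum(np.asarray(x) ** 2))

es = cma.CMAEvolutionStrategy(5 * [1.0], 0.5, {"verbose": -9})
while not es.stop():
    solutions = es.ask()                                  # sample a population
    es.tell(solutions, [sphere(s) for s in solutions])    # rank and update the distribution
print(es.result.xbest)                                    # near the origin
```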
  • Generation of Realistic Geolocated Agendas in a Digital Twin using GPS-based Mobility Dataset
    • Colomb Maxime
, 2025. This work aims to generate individual agendas and to locate each individual activity. Probabilistic tools are built to provide a framework for individual behaviours. Such information can then be injected into a digital twin of a community and its individuals in order to model how places are occupied and how individuals meet. It uses the detailed dataset provided by the NetMob 2025 Data Challenge.
  • Stability of non-conservative cross diffusion model and approximation by stochastic particle systems
    • Bansaye Vincent
    • Bertolino Alexandre
    • Moussa Ayman
, 2025. We study the stability of non-conservative deterministic cross-diffusion models and prove that they are approximated by stochastic population models when the populations become locally large. In this model, the individuals of two species move, reproduce and die with rates sensitive to the local densities of the two species. Quantitative estimates are given and convergence is obtained as soon as the population per site and the number of sites go to infinity. The proofs rely on the extension of stability estimates via a duality approach under a smallness condition and on the development of large deviation estimates for structured population models, which are of independent interest. The proofs also involve martingale estimates in H^{-1} and improve the approximation results in the conservative case as well.
  • Nonparametric hazard rate estimation with associated kernels and minimax bandwidth choice
    • Breuil Luce
    • Kaakai Sarah
, 2025. In this paper, we introduce a general theoretical framework for nonparametric hazard rate estimation using associated kernels, whose shapes depend on the point of estimation. Within this framework, we establish rigorous asymptotic results, including a second-order expansion of the MISE, and a central limit theorem for the proposed estimator. We also prove a new oracle-type inequality for both local and global minimax bandwidth selection, extending the Goldenshluger–Lepski method to the context of associated kernels. Our results propose a systematic way to construct and analyze new associated kernels. Finally, we show that the general framework applies to the Gamma kernel, and we provide several examples of applications on simulated data and experimental data for the study of aging. (10.48550/arXiv.2509.24535)
    DOI : 10.48550/arXiv.2509.24535
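To make the "associated kernel" idea concrete: a Gamma kernel whose shape varies with the estimation point x, here smoothing Nelson-Aalen increments in a simplified uncensored setting (a Chen-type Gamma kernel sketch under stated assumptions, not the paper's estimator):

```python
import numpy as np
from scipy.stats import gamma

# At estimation point x with bandwidth b, the associated kernel K_{x,b} is the
# Gamma(x/b + 1, scale=b) density, so its shape adapts to x (boundary-friendly).
rng = np.random.default_rng(5)
T = np.sort(rng.weibull(1.5, size=500))       # event times; true hazard is 1.5 t^{0.5}
n, b = len(T), 0.1

def hazard(x):
    K = gamma.pdf(T, a=x / b + 1.0, scale=b)  # associated kernel evaluated at the data
    return np.sum(K / (n - np.arange(n)))     # kernel-smoothed Nelson-Aalen increments

print(hazard(1.0))   # compare with the true value 1.5 at t = 1
```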
  • Hybrid stochastic-structural modelling of particle-laden turbulent flows based on wavelet reconstruction
    • Letournel Roxane
    • Morhain Clément
    • Massot Marc
    • Vié Aymeric
, 2025. Reduced-order modelling and simulation of turbulent particle-laden flows is required in numerous configurations where resolving the whole spectrum of turbulent scales through DNS is out of reach. Whereas structural or stochastic models have been derived in order to provide a synthetic turbulence model for the non-resolved scales of the gaseous flow field, reproducing preferential concentration is challenging because it requires capturing both spatial and temporal correlations. We present a novel reduced-order framework that overcomes this limitation by combining wavelet-based structural modelling with stochastic evolution. Using compactly supported divergence-free wavelets within a multiresolution analysis, the method provides direct control over spatial structures and correlations of synthetic multiscale velocity fields. In particular, a dedicated procedure makes it possible to enforce a prescribed turbulent energy spectrum despite the nonlocal contribution in Fourier space of the wavelet basis functions. The stochastic evolution of wavelet coefficients further ensures consistent temporal correlations. The proposed framework is evaluated in homogeneous isotropic turbulence under a fully reduced setting, where all turbulent scales must be provided by the model. Results show that it accurately reproduces preferential concentration across a wide range of Stokes numbers, achieving closer agreement with DNS data than classical Fourier-based kinematic simulations. This establishes a versatile and physically consistent turbulence model that combines structural fidelity with stochastic dynamics, offering a new tool to investigate particle–turbulence interactions.
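A minimal sketch of the "stochastic evolution of coefficients" idea: one Ornstein-Uhlenbeck process per scale, with stationary variance chosen to mimic a Kolmogorov-like spectrum E(k) ~ k^{-5/3}. The paper's wavelet bases and divergence-free constraints are not reproduced, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
J = 6                                   # number of scales
k = 2.0 ** np.arange(J)                 # representative wavenumber per scale
var = k ** (-5.0 / 3.0)                 # target stationary variance per scale
tau = 1.0 / k                           # eddy-turnover-like correlation times
dt, c = 1e-3, np.zeros(J)
rho = np.exp(-dt / tau)                 # exact OU decay factor over one step
acc = np.zeros(J)
for _ in range(100_000):
    # Exact OU update: preserves the prescribed stationary variance per scale.
    c = rho * c + np.sqrt(var * (1.0 - rho ** 2)) * rng.standard_normal(J)
    acc += c ** 2
print(np.round(acc / 100_000, 4))       # empirical variances, close to the targets
print(np.round(var, 4))
```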