
Publications


Theses defended at CMAP are available via the following link:
Discover the CMAP theses

Listed below, by year, are the publications appearing in the HAL open archive.

2020

  • Modulation of homogeneous and isotropic turbulence by sub-Kolmogorov particles: Impact of particle field heterogeneity
    • Letournel Roxane
    • Laurent Frédérique
    • Massot Marc
    • Vié Aymeric
    International Journal of Multiphase Flow, Elsevier, 2020, 125, pp.103233. The modulation of turbulence by sub-Kolmogorov particles has been thoroughly characterized in the literature, showing either enhancement or reduction of kinetic energy at small or large scale depending on the Stokes number and the mass loading. However, the impact of a third parameter, the number density of particles, has not been independently investigated. In the present work, we perform direct numerical simulations of decaying Homogeneous Isotropic Turbulence loaded with monodisperse sub-Kolmogorov particles, varying independently the Stokes number, the mass loading and the number density of particles. As in previous studies, crossover and modulations of the fluid energy spectra are observed, consistent with the change in Stokes number and mass loading. Additionally, DNS results show a clear impact of the particle number density, promoting the energy at small scales while reducing the energy at large scales. For high particle number density, the turbulence statistics and spectra become insensitive to the increase of this parameter, presenting a two-way asymptotic behavior. Our investigation identifies the energy transfer mechanisms, and highlights the differences between the influence of a highly concentrated disperse phase (high particle number density, limit behavior) and that of heterogeneous concentration fields (low particle number density). In particular, a measure of this heterogeneity is proposed and discussed which makes it possible to identify specific regimes in the evolution of turbulence statistics and spectra. (10.1016/j.ijmultiphaseflow.2020.103233)
    DOI : 10.1016/j.ijmultiphaseflow.2020.103233
  • Deep Generative Models: high-dimensional sampling revisited
    • Moulines Eric
    , 2020. Generative models (GM) make it possible to infer distribution models for high-dimensional structured observations, which are typical of modern AI. Generative models can also be used to sample new examples, linking the inference problem to sampling. Learning deep generative models (DGM) capable of capturing the complex dependency structures of distributions from large datasets in an unsupervised or semi-supervised setting appears today as one of the main challenges of AI. Deep generative models have many exciting applications: addressing data scarcity by generating "new" examples, preserving privacy by releasing the generative model instead of the data, and also detecting outlying observations. In this presentation, I will cover three research directions I am currently working on. A first approach is based on minimizing the cross-entropy (Kullback-Leibler divergence) between the distribution of the observations and a model parameterized either by deep neural networks or by better-suited energy functions, connecting generative models to the "energy-based models" that were introduced for unsupervised learning (but in a non-probabilistic setting). This approach is appealing but raises difficult computational problems, related to the need to estimate the normalizing constant and its gradient. A second approach relies on maximum entropy methods. This approach originates in statistical physics and learns a distribution maximizing entropy under moment constraints, which are built from a representation produced by a deep neural network. A third approach consists in using variational autoencoders (VAE), a special case of variational inference. VAEs jointly learn an algorithm to generate samples from the distribution and a latent space that summarizes the distribution of the observations. I will illustrate these approaches with examples and discuss the theoretical and numerical challenges they raise. [Video online: https://videos-rennes.inria.fr/video/HJt6vEaXI]
  • Comparative study of harmonic and Rayleigh-Ritz procedures with applications to deflated conjugate gradients
    • Venkovic Nicolas
    • Mycek Paul
    • Giraud Luc
    • Le Maitre Olivier
    , 2020. Harmonic Rayleigh-Ritz and Rayleigh-Ritz projection techniques are compared in the context of iterative procedures to solve for small numbers of least dominant eigenvectors of large symmetric positive definite matrices. The procedures considered are (i) locally optimal conjugate gradient (CG) methods, i.e., LOBCG, (ii) thick-restart Lanczos methods, and (iii) recycled linear CG solvers, e.g., eigCG. Approaches based on principles of local optimality are adapted to enable the use of harmonic projection techniques. Upon investigating the search spaces generated by these methods, it is found that LOBCG and thick-restart Lanczos methods can be adapted, which is not the case of eigCG. Explanations are also given as to why eigCG works so well in comparison to other recycling strategies. Numerical experiments show that, while approaches based on harmonic projections consistently result in a faster convergence of eigen-residuals, they generally do not yield better convergence of the forward error of eigenvectors, until the Rayleigh quotients have converged. Then, the effect of recycling strategies is investigated on deflation for the resolution of sequences of linear systems. While non-locally optimal recycling strategies need to solve more linear systems in order to fully develop their effect on convergence, they eventually reach similar behaviors to those of locally optimal recycling procedures. While implementations based on Init-CG are robust for systems with multiple right-hand sides, this is not the case for multiple operators.
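As a minimal illustration of the plain (non-harmonic) Rayleigh-Ritz procedure compared above, the sketch below projects a small SPD matrix onto a two-dimensional subspace and solves the projected eigenproblem in closed form. The matrix and subspace are hypothetical, and this is not the authors' implementation, which targets large matrices and Krylov-type search spaces.

```python
import math

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def rayleigh_ritz_2d(A, v1, v2):
    # Rayleigh-Ritz: project A onto span(v1, v2) (assumed orthonormal) and
    # solve the 2x2 projected symmetric eigenproblem via the quadratic formula.
    a = dot(v1, matvec(A, v1))
    b = dot(v1, matvec(A, v2))
    d = dot(v2, matvec(A, v2))
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4 * det)
    return [(tr - disc) / 2, (tr + disc) / 2]

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
ritz = rayleigh_ritz_2d(A, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
print(ritz)  # Ritz values approximating two eigenvalues of A
```

The Ritz values are the eigenvalues of the projected matrix; enlarging the subspace tightens the approximation.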
  • Quantitative modeling links in vivo microstructural and macrofunctional organization of human and macaque insular cortex, and predicts cognitive control abilities
    • Menon Vinod
    • Gallardo Guillermo
    • Pinsk Mark
    • Nguyen Van-Dang
    • Li Jing-Rebecca
    • Cai Weidong
    • Wassermann Demian
    , 2020. (10.1101/662601)
    DOI : 10.1101/662601
  • Adaptive Bayesian SLOPE—High-dimensional Model Selection with Missing Values
    • Jiang Wei
    • Bogdan Malgorzata
    • Josse Julie
    • Miasojedow Blazej
    • Rockova Veronika
    , 2020. We consider the problem of variable selection in high-dimensional settings with missing observations among the covariates. To address this relatively understudied problem, we propose a new synergistic procedure -- adaptive Bayesian SLOPE -- which effectively combines the SLOPE method (sorted l1 regularization) together with the Spike-and-Slab LASSO method. We position our approach within a Bayesian framework which allows for simultaneous variable selection and parameter estimation, despite the missing values. As with the Spike-and-Slab LASSO, the coefficients are regarded as arising from a hierarchical model consisting of two groups: (1) the spike for the inactive and (2) the slab for the active. However, instead of assigning independent spike priors for each covariate, here we deploy a joint "SLOPE" spike prior which takes into account the ordering of coefficient magnitudes in order to control for false discoveries. Through extensive simulations, we demonstrate satisfactory performance in terms of power, FDR and estimation bias under a wide range of scenarios. Finally, we analyze a real dataset consisting of patients from Paris hospitals who underwent a severe trauma, where we show excellent performance in predicting platelet levels. Our methodology has been implemented in C++ and wrapped into an R package ABSLOPE for public use.
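The sorted-l1 (SLOPE) penalty that the abstract combines with Spike-and-Slab LASSO can be written in a few lines. The sketch below is a minimal example with a hypothetical non-increasing lambda sequence; it is not the ABSLOPE implementation.

```python
def slope_penalty(beta, lambdas):
    # SLOPE's sorted-l1 penalty: pair the largest |coefficient| with the
    # largest lambda, the second largest with the second lambda, and so on.
    mags = sorted((abs(b) for b in beta), reverse=True)
    return sum(l * m for l, m in zip(lambdas, mags))

lam = [3.0, 2.0, 1.0]  # hypothetical non-increasing regularization sequence
penalty = slope_penalty([0.5, -2.0, 1.0], lam)
print(penalty)  # 3*2.0 + 2*1.0 + 1*0.5 = 8.5
```

Penalizing larger coefficients more heavily is what lets SLOPE control the false discovery rate.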
  • Computing bi-tangents for transmission belts
    • Chouly Franz
    • Loubani Jinan
    • Lozinski Alexei
    • Méjri Bochra
    • Merito Kamil
    • Passos Sébastien
    • Pineda Angie
    , 2020. In this note, we determine the bi-tangents of two rotated ellipses, and we compute the coordinates of their points of tangency. For these purposes, we develop two approaches. The first one is an analytical approach in which we compute analytically the equations of the bi-tangents. This approach is valid only for some cases. The second one is geometrical and is based on the determination of the normal vector to the tangent line. This approach turns out to be more robust than the first one and is valid for any configuration of ellipses.
  • Nonparametric imputation by data depth
    • Mozharovskyi Pavlo
    • Josse Julie
    • Husson François
    Journal of the American Statistical Association, Taylor & Francis, 2020, 115 (529), pp.241-253. The presented methodology for single imputation of missing values borrows the idea from data depth --- a measure of centrality defined for an arbitrary point of the space with respect to a probability distribution or a data cloud. This consists in iterative maximization of the depth of each observation with missing values, and can be employed with any properly defined statistical depth function. On each single iteration, imputation is narrowed down to optimization of a quadratic, linear, or quasiconcave function being solved analytically, by linear programming, or by the Nelder-Mead method, respectively. Being able to grasp the underlying data topology, the procedure is distribution free, allows imputation close to the data, preserves prediction possibilities, unlike local imputation methods (k-nearest neighbors, random forest), and has attractive robustness and asymptotic properties under elliptical symmetry. It is shown that its particular case --- when using Mahalanobis depth --- has a direct connection to well-known treatments for the multivariate normal model, such as iterated regression or regularized PCA. The methodology is extended to multiple imputation for data stemming from an elliptically symmetric distribution. Simulation and real data studies positively contrast the procedure with existing popular alternatives. The method has been implemented as an R-package. (10.1080/01621459.2018.1543123)
    DOI : 10.1080/01621459.2018.1543123
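The Mahalanobis-depth special case mentioned in the abstract can be sketched in a few lines: the missing coordinate is chosen to maximize the depth of the completed point, which for Mahalanobis depth recovers the conditional mean. The covariance, observed value, and grid search below are illustrative stand-ins, not the paper's analytical optimization.

```python
def mahalanobis_depth(x, mu, cov):
    # Mahalanobis depth: 1 / (1 + squared Mahalanobis distance to the center).
    # The 2x2 covariance is inverted with the closed-form adjugate formula.
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mu[0], x[1] - mu[1]]
    md2 = sum(dx[i] * inv[i][j] * dx[j] for i in range(2) for j in range(2))
    return 1.0 / (1.0 + md2)

mu, cov = [0.0, 0.0], [[2.0, 1.0], [1.0, 2.0]]
x_obs = 2.0  # observed first coordinate; the second coordinate is missing
candidates = [i / 100 for i in range(-400, 401)]
imputed = max(candidates, key=lambda v: mahalanobis_depth([x_obs, v], mu, cov))
print(imputed)  # 1.0, the conditional mean of the missing coordinate
```

Here the deepest completion coincides with the regression prediction cov[1][0]/cov[0][0] * x_obs = 1.0, illustrating the connection to iterated regression noted in the abstract.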
  • Maximization of the Steklov Eigenvalues with a Diameter Constraint
    • Al Sayed Abdelkader
    • Bogosel Beniamin
    • Henrot Antoine
    • Nacry Florent
    SIAM Journal on Mathematical Analysis, Society for Industrial and Applied Mathematics, 2020, 53 (1), pp.710-729. In this paper, we address the problem of maximizing the Steklov eigenvalues with a diameter constraint. We provide an estimate of the Steklov eigenvalues for a convex domain in terms of its diameter and volume and we show the existence of an optimal convex domain. We establish that balls are never maximizers, even for the first non-trivial eigenvalue, which contrasts with the case of volume or perimeter constraints. Under an additional regularity assumption, we are able to prove that the Steklov eigenvalue is multiple for the optimal domain. We illustrate our theoretical results by giving some optimal domains in the plane thanks to a numerical algorithm. (10.1137/20M1335042)
    DOI : 10.1137/20M1335042
  • Second order linear differential equations with analytic uncertainties: stochastic analysis via the computation of the probability density function
    • Jornet Marc
    • Calatayud Julia
    • Le Maitre Olivier
    • Cortés Juan Carlos
    Journal of Computational and Applied Mathematics, Elsevier, 2020, 374, pp.112770. This paper concerns the analysis of random second order linear differential equations. Usually, solving these equations consists of computing the first statistics of the response process, and that task has been an essential goal in the literature. A more ambitious objective is the computation of the solution probability density function. We present advances on these two aspects in the case of general random non-autonomous second order linear differential equations with analytic data processes. The Frobenius method is employed to obtain the stochastic solution in the form of a mean square convergent power series. We demonstrate that the convergence requires the boundedness of the random input coefficients. Further, the mean square error of the Frobenius method is proved to decrease exponentially with the number of terms in the series, although not uniformly in time. Regarding the probability density function of the solution at a given time, which is the focus of the paper, we rely on the law of total probability to express it in closed-form as an expectation. For the computation of this expectation, a sequence of approximating density functions is constructed by reducing the dimensionality of the problem using the truncated power series of the fundamental set. We prove several theoretical results regarding the pointwise convergence of the sequence of density functions and the convergence in total variation. The pointwise convergence turns out to be exponential under a Lipschitz hypothesis. As the density functions are expressed in terms of expectations, we propose a crude Monte Carlo sampling algorithm for their estimation. This algorithm is implemented and applied on several numerical examples designed to illustrate the theoretical findings of the paper. After that, the efficiency of the algorithm is improved by employing the control variates method.
Numerical examples corroborate the variance reduction of the Monte Carlo approach. (10.1016/j.cam.2020.112770)
    DOI : 10.1016/j.cam.2020.112770
  • SPIX: a new software package to reveal chemical reactions at trace amounts in very complex mixtures from high-resolution mass spectra data sets
    • Nicol Edith
    • Xu Yao
    • Varga Zsuzsanna
    • Kinani Said
    • Bouchonnet Stéphane
    • Lavielle Marc
    Rapid Communications in Mass Spectrometry, Wiley, 2020. Rationale: High-resolution mass spectrometry-based non-targeted screening has a huge potential for applications in environmental sciences, engineering and regulation. However, it produces big data for which full appropriate processing is a real challenge; the development of processing software is the last building-block to enable large-scale use of this approach. Methods: A new software application, SPIX, has been developed to extract relevant information from high-resolution mass-spectrum datasets. Dealing with intrinsic sample variability and reducing operator subjectivity, it opens up opportunities and promising prospects in many areas of analytical chemistry. SPIX is freely available at: http://spix.webpopix.org. Results: Two features of the software are presented in the field of environmental analysis. An example illustrates how SPIX reveals photodegradation reactions in wastewater by fitting kinetic models to significant changes in ion abundance over time. A second example shows the ability of SPIX to detect photoproducts at trace amounts in river water, through comparison of datasets from samples taken before and after irradiation. Conclusions: SPIX has shown its ability to reveal relevant modifications between two series of big data sets, allowing one, for instance, to study the consequences of a given event on a complex substrate. Most of all, and this is to our knowledge the only software currently available allowing this, it can reveal and monitor any kind of reaction in all types of mixtures.
  • Optimality conditions in variational form for non-linear constrained stochastic control problems
    • Pfeiffer Laurent
    Mathematical Control and Related Fields, AIMS, 2020, 10 (3), pp.493-526. Optimality conditions in the form of a variational inequality are proved for a class of constrained optimal control problems of stochastic differential equations. The cost function and the inequality constraints are functions of the probability distribution of the state variable at the final time. The analysis uses in an essential manner a convexity property of the set of reachable probability distributions. An augmented Lagrangian method based on the obtained optimality conditions is proposed and analyzed for solving iteratively the problem. At each iteration of the method, a standard stochastic optimal control problem is solved by dynamic programming. Two academical examples are investigated. (10.3934/mcrf.2020008)
    DOI : 10.3934/mcrf.2020008
  • High to Low pellet cladding gap heat transfer modeling methodology in an uncertainty quantification framework for a PWR Rod Ejection Accident with best estimate coupling
    • Delipei Gregory Kyriakos
    • Garnier Josselin
    • Le Pallec Jean-Charles
    • Normand Benoit
    EPJ N - Nuclear Sciences & Technologies, EDP Sciences, 2020, 6, pp.56. High to Low modeling approaches can alleviate the computationally expensive fuel modeling in nuclear reactor transient uncertainty quantification. This is especially the case for Rod Ejection Accident (REA) in Pressurized Water Reactors (PWR), where strong multi-physics interactions occur. In this work, we develop and propose a pellet cladding gap heat transfer (Hgap) High to Low modeling methodology for a PWR REA in an uncertainty quantification framework. The methodology involves the calibration of a simplified Hgap model based on high fidelity simulations with the fuel thermomechanics code ALCYONE1. The calibrated model is then introduced into the CEA-developed CORPUS Best Estimate (BE) multi-physics coupling between APOLLO3® and FLICA4. This creates an Improved Best Estimate (IBE) coupling that is then used for an uncertainty quantification study. The results indicate that with IBE the distance to boiling crisis uncertainty is decreased from 57% to 42%. This is reflected in the decreased sensitivity to Hgap. In the BE coupling, Hgap was responsible for 50% of the output variance, while in IBE it is close to 0. These results show the potential gain of High to Low approaches for Hgap modeling in REA uncertainty analyses. (10.1051/epjn/2020018)
    DOI : 10.1051/epjn/2020018
  • Validation strategy of reduced-order two-fluid flow models based on a hierarchy of direct numerical simulations
    • Cordesse Pierre
    • Remigi Alberto
    • Duret Benjamin
    • Murrone Angelo
    • Ménard Thibaut
    • Demoulin François-Xavier
    • Massot Marc
    Flow, Turbulence and Combustion, Springer Verlag, 2020, 105 (4), pp.1381-1411. Whereas direct numerical simulation (DNS) have reached a high level of description in the field of atomization processes, they are not yet able to cope with industrial needs since they lack resolution and are too costly. Predictive simulations relying on reduced order modeling have become mandatory for applications ranging from cryotechnic to aeronautic combustion chamber liquid injection. Two-fluid models provide a good basis in order to conduct such simulations, even if recent advances allow to refine subscale modeling using geometrical variables in order to reach a unified model including separate phases and disperse phase descriptions based on high order moment methods. The simulation of such models has to rely on dedicated numerical methods and still lacks assessment of its predictive capabilities. The present paper constitutes a building block of the investigation of a hierarchy of test-cases designed to be amenable to DNS while close enough to industrial configurations, for which we propose a comparison of two-fluid compressible simulations with DNS data-bases. We focus in the present contribution on an air-assisted water atomization using a planar liquid sheet injector. Qualitative and quantitative comparisons with incompressible DNS allow us to identify and analyze strength and weaknesses of the reduced-order modeling and numerical approach in this specific configuration and set a framework for more refined models since they already provide a very interesting level of comparison on averaged quantities. (10.1007/s10494-020-00154-w)
    DOI : 10.1007/s10494-020-00154-w
  • Error estimates for phase recovering from phaseless scattering data
    • Novikov Roman
    • Sivkin Vladimir
    Eurasian Journal of Mathematical and Computer Applications, Eurasian National University, Kazakhstan (Nur-Sultan), 2020, 8 (1), pp.44-61. We study the simplest explicit formulas for approximately finding the complex scattering amplitude from the modulus of the scattering wave function. We obtain detailed error estimates for these formulas in dimensions d = 3 and d = 2.
  • An Eco-Routing Algorithm for HEVs Under Traffic Conditions
    • Le Rhun Arthur
    • Bonnans Frédéric
    • De Nunzio Giovanni
    • Leroy Thomas
    • Martinon Pierre
    IFAC-PapersOnLine, Elsevier, 2020, 53 (2), pp.14242 - 14247. In a previous work, a bi-level optimization approach was presented for the energy management of Hybrid Electric Vehicles (HEVs), using a statistical model for traffic conditions. The present work is an extension of this framework to the eco-routing problem. The optimal trajectory is computed as the shortest path on a weighted graph whose nodes are (position, state of charge) pairs for the vehicle. The edge costs are provided by cost maps from an offline optimization at the lower level of small road segments. The error due to the discretization of the state of charge is proven to be linear if the cost maps are Lipschitz. The classical A* algorithm is used to solve the problem, with a heuristic based on a lower bound of the energy needed to complete the travel. The eco-routing method is compared to the fastest-path strategy by numerical simulations on a simple synthetic road network. (10.1016/j.ifacol.2020.12.1158)
    DOI : 10.1016/j.ifacol.2020.12.1158
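A minimal sketch of the A* search that the eco-routing method builds on, run on a hypothetical toy graph with a trivially admissible zero heuristic. The paper's heuristic is an energy lower bound, and its edge costs come from offline-optimized cost maps; the placeholders below only illustrate the search itself.

```python
import heapq

def a_star(graph, start, goal, h):
    # graph: node -> list of (neighbor, edge_cost); h gives an admissible
    # lower bound on the remaining cost (here, remaining energy to the goal).
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node at equal or lower cost
        best_g[node] = g
        for nxt, cost in graph.get(node, []):
            heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# Hypothetical toy network; weights stand in for per-segment energy costs.
graph = {'A': [('B', 2.0), ('C', 5.0)], 'B': [('C', 1.0), ('D', 4.0)], 'C': [('D', 1.0)]}
route = a_star(graph, 'A', 'D', lambda n: 0.0)
print(route)  # (4.0, ['A', 'B', 'C', 'D'])
```

With a zero heuristic A* reduces to Dijkstra's algorithm; a tighter admissible energy bound prunes the frontier without changing the optimal route.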
  • Ergodic behavior of non-conservative semigroups via generalized Doeblin's conditions
    • Bansaye Vincent
    • Cloez Bertrand
    • Gabriel Pierre
    Acta Applicandae Mathematicae, Springer Verlag, 2020, 166 (1), pp.29-72. We provide quantitative estimates in total variation distance for positive semi-groups, which can be non-conservative and non-homogeneous. The technique relies on a family of conservative semigroups that describes a typical particle and Doeblin-type conditions for coupling the associated process. Our aim is to provide quantitative estimates for linear partial differential equations and we develop several applications for population dynamics in varying environment. We start with the asymptotic profile for a growth diffusion model with time and space non-homogeneity. Moreover we provide general estimates for semigroups which become asymptotically homogeneous, which are applied to an age-structured population model. Finally, we obtain a speed of convergence for periodic semi-groups and new bounds in the homogeneous setting. They are illustrated on the renewal equation. (10.1007/s10440-019-00253-5)
    DOI : 10.1007/s10440-019-00253-5
  • Quality Gain Analysis of the Weighted Recombination Evolution Strategy on General Convex Quadratic Functions
    • Akimoto Youhei
    • Auger Anne
    • Hansen Nikolaus
    Theoretical Computer Science, Elsevier, 2020, 832, pp.42-67. Quality gain is the expected relative improvement of the function value in a single step of a search algorithm. Quality gain analysis reveals the dependencies of the quality gain on the parameters of a search algorithm, based on which one can derive the optimal values for the parameters. In this paper, we investigate evolution strategies with weighted recombination on general convex quadratic functions. We derive a bound for the quality gain and two limit expressions of the quality gain. From the limit expressions, we derive the optimal recombination weights and the optimal step-size, and find that the optimal recombination weights are independent of the Hessian of the objective function. Moreover, the dependencies of the optimal parameters on the dimension and the population size are revealed. Differently from previous works where the population size is implicitly assumed to be smaller than the dimension, our results cover the population size proportional to or greater than the dimension. Simulation results show that the optimal parameters derived in the limit approximate the optimal values in non-asymptotic scenarios. (10.1016/j.tcs.2018.05.015)
    DOI : 10.1016/j.tcs.2018.05.015
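The weighted-recombination evolution strategy analyzed above can be sketched in a few lines. The sphere function, fixed step-size, population size, and logarithmic weights below are illustrative choices, not the paper's optimal parameter settings.

```python
import math
import random

random.seed(0)

def es_step(mean, sigma, lam, f):
    # One iteration of a (mu/mu_w, lambda) evolution strategy: sample lambda
    # offspring around the mean, rank them by fitness (minimization), and
    # recombine the best mu with logarithmically decreasing weights.
    offspring = [[m + sigma * random.gauss(0, 1) for m in mean] for _ in range(lam)]
    offspring.sort(key=f)
    mu = lam // 2
    w = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    s = sum(w)
    w = [wi / s for wi in w]
    return [sum(w[i] * offspring[i][d] for i in range(mu)) for d in range(len(mean))]

sphere = lambda x: sum(v * v for v in x)
mean = [1.0, 1.0]
for _ in range(50):
    mean = es_step(mean, 0.1, 10, sphere)
print(sphere(mean))  # typically far below the starting value of 2.0
```

The quality gain studied in the paper is exactly the expected per-step improvement of f(mean) in such an iteration, as a function of the weights and the step-size.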
  • Additive manufacturing scanning paths optimization using shape optimization tools
    • Boissier M
    • Allaire G.
    • Tournier Christophe
    Structural and Multidisciplinary Optimization, Springer Verlag, 2020, 61, pp.2437–2466. This paper investigates path planning strategies for additive manufacturing processes such as powder bed fusion. The state of the art mainly studies trajectories based on existing patterns. Parametric optimization on these patterns or allocating them to the object areas are the main strategies. We propose in this work a more systematic optimization approach without any a priori restriction on the trajectories. The typical optimization problem is to melt the desired structure, without overheating (to avoid thermally induced residual stresses) and possibly with a minimal path length. The state equation is the heat equation with a source term depending on the scanning path. First, in a steady-state context, shape optimization tools are applied to trajectories. Second, for time-dependent problems, an optimal control method is considered instead. In both cases, gradient type algorithms are deduced and tested on 2-d examples. Numerical results are discussed, leading to a better understanding of the problem and thus to short- and long-term perspectives. (10.1007/s00158-020-02614-3)
    DOI : 10.1007/s00158-020-02614-3
  • Variance Reduction Methods and Multilevel Monte Carlo Strategy for Estimating Densities of Solutions to Random Second-Order Linear Differential Equations
    • Jornet Marc
    • Calatayud Julia
    • Le Maitre Olivier
    • Cortés Juan Carlos
    International Journal for Uncertainty Quantification, Begell House Publishers, 2020, 10 (5), pp.467-497. This paper concerns the estimation of the density function of the solution to a random nonautonomous second-order linear differential equation with analytic data processes. In a recent contribution, we proposed to express the density function as an expectation, and we used a standard Monte Carlo algorithm to approximate the expectation. Although the algorithms worked satisfactorily for most test problems, some numerical challenges emerged for others, due to large statistical errors. In these situations, the convergence of the Monte Carlo simulation slows down severely, and noisy features plague the estimates. In this paper, we focus on computational aspects and propose several variance reduction methods to remedy these issues and speed up the convergence. First, we introduce a pathwise selection of the approximating processes which aims at controlling the variance of the estimator. Second, we propose a hybrid method, combining Monte Carlo and deterministic quadrature rules, to estimate the expectation. Third, we exploit the series expansions of the solutions to design a multilevel Monte Carlo estimator. The proposed methods are implemented and tested on several numerical examples to highlight the theoretical discussions and demonstrate the significant improvements achieved.
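Among the variance reduction techniques named above, the control variates idea can be illustrated in isolation. The sketch below estimates E[exp(Z)] for standard normal Z, using Z itself (whose mean is known exactly) as the control; this toy target is a hypothetical stand-in for the paper's density estimators.

```python
import math
import random

random.seed(1)

def control_variate_estimate(n):
    # Estimate E[exp(Z)], Z ~ N(0, 1), with Z as a control variate:
    # E[Z] = 0 is known exactly, and the coefficient beta is fitted
    # from the same sample to minimize the estimator's variance.
    zs = [random.gauss(0, 1) for _ in range(n)]
    ys = [math.exp(z) for z in zs]
    my = sum(ys) / n
    mz = sum(zs) / n
    cov = sum((y - my) * (z - mz) for y, z in zip(ys, zs)) / n
    var = sum((z - mz) ** 2 for z in zs) / n
    beta = cov / var
    return sum(y - beta * (z - 0.0) for y, z in zip(ys, zs)) / n

est = control_variate_estimate(20000)
print(est)  # close to the exact value exp(0.5)
```

Subtracting the correlated, zero-mean control removes part of the sampling noise, which is the mechanism the paper combines with pathwise selection and multilevel estimation.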
  • Hölder-logarithmic stability in Fourier synthesis
    • Isaev Mikhail
    • Novikov Roman G
    Inverse Problems, IOP Publishing, 2020, 36 (12), pp.125003(17 pp.). We prove a Hölder-logarithmic stability estimate for the problem of finding a sufficiently regular compactly supported function v on R^d from its Fourier transform Fv given on [−r, r]^d. This estimate relies on a Hölder stable continuation of Fv from [−r, r]^d to a larger domain. The related reconstruction procedures are based on truncated series of Chebyshev polynomials. We also give an explicit example showing optimality of our stability estimates. (10.1088/1361-6420/abb5df)
    DOI : 10.1088/1361-6420/abb5df
  • SCALPEL3: a scalable open-source library for healthcare claims databases
    • Bacry Emmanuel
    • Gaiffas Stéphane
    • Leroy Fanny
    • Morel Maryan
    • Nguyen D.P.
    • Sebiat Youcef
    • Sun Dian
    International Journal of Medical Informatics, Elsevier, 2020.
  • State-constrained control-affine parabolic problems I: First and Second order necessary optimality conditions
    • Aronna M Soledad
    • Bonnans J. Frederic
    • Kröner Axel
    Set-Valued and Variational Analysis, Springer, 2020. In this paper we consider an optimal control problem governed by a semilinear heat equation with bilinear control-state terms and subject to control and state constraints. The state constraints are of integral type, the integral being with respect to the space variable. The control is multidimensional. The cost functional is of a tracking type and contains a linear term in the control variables. We derive second order necessary conditions relying on the concept of alternative costates and quasi-radial critical directions. The appendix provides an example illustrating the applicability of our results.
  • Fluctuation theory in the Boltzmann--Grad limit
    • Bodineau Thierry
    • Gallagher Isabelle
    • Saint-Raymond Laure
    • Simonella Sergio
    Journal of Statistical Physics, Springer Verlag, 2020, 180, pp.873–895. We develop a rigorous theory of hard-sphere dynamics in the kinetic regime, away from thermal equilibrium. In the low density limit, the empirical density obeys a law of large numbers and the dynamics is governed by the Boltzmann equation. Deviations from this behaviour are described by dynamical correlations, which can be fully characterized for short times. This provides both a fluctuating Boltzmann equation and large deviation asymptotics.
  • Orlicz Random Fourier Features
    • Chamakh Linda
    • Gobet Emmanuel
    • Szabó Zoltán
    Journal of Machine Learning Research, Microtome Publishing, 2020, 21 (145), pp.1−37. Kernel techniques are among the most widely-applied and influential tools in machine learning with applications in virtually all areas of the field. To combine this expressive power with computational efficiency, numerous randomized schemes have been proposed in the literature, among which probably random Fourier features (RFF) are the simplest and most popular. While RFFs were originally designed for the approximation of kernel values, recently they have been adapted to kernel derivatives, and hence to the solution of large-scale tasks involving function derivatives. Unfortunately, the understanding of the RFF scheme for the approximation of higher-order kernel derivatives is quite limited due to the challenging polynomially growing nature of the underlying function class in the empirical process. To tackle this difficulty, we establish a finite-sample deviation bound for a general class of polynomial-growth functions under an α-exponential Orlicz condition on the distribution of the sample. Instantiating this result for RFFs, our finite-sample uniform guarantee implies a.s. convergence with tight rate for arbitrary kernels with α-exponential Orlicz spectrum and any order of derivative.
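The basic random Fourier feature approximation of kernel values that the paper extends can be sketched as follows; the Gaussian kernel, bandwidth, and feature count below are illustrative choices.

```python
import math
import random

random.seed(2)

def rff_kernel(x, y, num_features, gamma=0.5):
    # Approximate the Gaussian kernel k(x, y) = exp(-gamma * |x - y|^2) with
    # random Fourier features: draw w ~ N(0, 2*gamma*I) from the kernel's
    # spectral measure and b ~ U[0, 2*pi]; average 2*cos(<w,x>+b)*cos(<w,y>+b).
    d = len(x)
    total = 0.0
    for _ in range(num_features):
        w = [random.gauss(0, math.sqrt(2 * gamma)) for _ in range(d)]
        b = random.uniform(0, 2 * math.pi)
        zx = math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b)
        zy = math.cos(sum(wi * yi for wi, yi in zip(w, y)) + b)
        total += (2.0 / num_features) * zx * zy
    return total

x, y = [1.0, 0.0], [0.0, 1.0]
exact = math.exp(-0.5 * sum((a - b) ** 2 for a, b in zip(x, y)))
approx = rff_kernel(x, y, 5000)
print(approx, exact)  # the approximation should be close to the exact value
```

By Bochner's theorem the expectation of each feature product equals the kernel value; the paper's results quantify how uniformly such averages (and their derivatives) converge.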
  • Computing invariant sets of random differential equations using polynomial chaos
    • Breden Maxime
    • Kuehn Christian
    SIAM Journal on Applied Dynamical Systems, Society for Industrial and Applied Mathematics, 2020, 19 (1), pp.577–618. Differential equations with random parameters have gained significant prominence in recent years due to their importance in mathematical modelling and data assimilation. In many cases, random ordinary differential equations (RODEs) are studied by using Monte-Carlo methods or by direct numerical simulation techniques using polynomial chaos (PC), i.e., by a series expansion of the random parameters in combination with forward integration. Here we take a dynamical systems viewpoint and focus on the invariant sets of differential equations such as steady states, stable/unstable manifolds, periodic orbits, and heteroclinic orbits. We employ PC to compute representations of all these different types of invariant sets for RODEs. This allows us to obtain fast sampling, geometric visualization of distributional properties of invariant sets, and uncertainty quantification of dynamical output such as periods or locations of orbits. We apply our techniques to a predator-prey model, where we compute steady states and stable/unstable manifolds. We also include several benchmarks to illustrate the numerical efficiency of adaptively chosen PC depending upon the random input. Then we employ the methods for the Lorenz system, obtaining computational PC representations of periodic orbits, stable/unstable manifolds and heteroclinic orbits. (10.1137/18M1235818)
    DOI : 10.1137/18M1235818
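A minimal illustration of a polynomial chaos expansion of the kind used above: projecting f(xi) = exp(xi), with xi standard normal, on probabilists' Hermite polynomials, for which the exact coefficients are exp(1/2)/k!. The Monte Carlo projection below is an illustrative stand-in for the paper's PC machinery, which expands invariant sets rather than a scalar function.

```python
import math
import random

random.seed(3)

def hermite_pc_coeffs(f, K, n=200000):
    # Monte Carlo projection of f(xi), xi ~ N(0,1), on probabilists' Hermite
    # polynomials He_k: c_k = E[f(xi) He_k(xi)] / k!, using the orthogonality
    # E[He_k^2] = k! and the recurrence He_k = x*He_{k-1} - (k-1)*He_{k-2}.
    coeffs = [0.0] * K
    for _ in range(n):
        xi = random.gauss(0, 1)
        he = [1.0, xi]
        for k in range(2, K):
            he.append(xi * he[k - 1] - (k - 1) * he[k - 2])
        fx = f(xi)
        for k in range(K):
            coeffs[k] += fx * he[k]
    return [c / (n * math.factorial(k)) for k, c in enumerate(coeffs)]

c = hermite_pc_coeffs(math.exp, 4)
print(c)  # each c_k should be close to exp(0.5) / k!
```

The truncated series sum_k c_k He_k(xi) then gives a cheap surrogate for f(xi) that can be resampled or post-processed, which is the mechanism behind the fast sampling claimed in the abstract.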