
Publications


Theses defended at CMAP are available via the following link:
Discover the CMAP theses

Listed below, by year, are the publications recorded in the HAL open archive.

2020

  • Modulation of homogeneous and isotropic turbulence by sub-Kolmogorov particles: Impact of particle field heterogeneity
    • Letournel Roxane
    • Laurent Frédérique
    • Massot Marc
    • Vié Aymeric
    International Journal of Multiphase Flow, Elsevier, 2020, 125, pp.103233. The modulation of turbulence by sub-Kolmogorov particles has been thoroughly characterized in the literature, showing either enhancement or reduction of kinetic energy at small or large scale depending on the Stokes number and the mass loading. However, the impact of a third parameter, the number density of particles, has not been independently investigated. In the present work, we perform direct numerical simulations of decaying Homogeneous Isotropic Turbulence loaded with monodisperse sub-Kolmogorov particles, varying independently the Stokes number, the mass loading and the number density of particles. As in previous investigations, crossovers and modulations of the fluid energy spectra are observed, consistent with the changes in Stokes number and mass loading. Additionally, DNS results show a clear impact of the particle number density, promoting the energy at small scales while reducing the energy at large scales. For high particle number density, the turbulence statistics and spectra become insensitive to the increase of this parameter, presenting a two-way asymptotic behavior. Our investigation identifies the energy transfer mechanisms, and highlights the differences between the influence of a highly concentrated disperse phase (high particle number density, limit behavior) and that of heterogeneous concentration fields (low particle number density). In particular, a measure of this heterogeneity is proposed and discussed which makes it possible to identify specific regimes in the evolution of turbulence statistics and spectra. (10.1016/j.ijmultiphaseflow.2020.103233)
    DOI : 10.1016/j.ijmultiphaseflow.2020.103233
  • Deep Generative Models: high-dimensional sampling revisited
    • Moulines Eric
    , 2020. Generative models (GMs) make it possible to infer distributional models for high-dimensional structured observations, which are typical of modern AI. Generative models can also be used to sample new examples, linking the inference problem to sampling. Learning deep generative models (DGMs) able to capture the complex dependence structures of distributions from large data sets, in an unsupervised or semi-supervised setting, now stands as one of the main challenges of AI. Deep generative models have many exciting applications: alleviating data scarcity by generating "new" examples, preserving privacy by releasing the generative model instead of the data, and detecting outlying observations. In this talk, I will cover three research directions I am currently working on. A first approach is based on minimizing the cross-entropy (Kullback-Leibler divergence) between the distribution of the observations and a model parametrized either by deep neural networks or by better-suited energy functions, connecting generative models with the energy-based models introduced for unsupervised learning (but in a non-probabilistic setting). This approach is appealing, but it raises difficult computational problems related to the need to estimate the normalizing constant and its gradient. A second approach relies on maximum-entropy methods. Originating in statistical physics, it learns a distribution maximizing the entropy under moment constraints built from a representation produced by a deep neural network. A third approach uses variational autoencoders (VAEs), a special case of variational inference. VAEs jointly learn an algorithm for generating samples from the distribution and a latent space that summarizes the distribution of the observations. I will illustrate these approaches with examples and discuss the theoretical and numerical challenges they raise. [Video online: https://videos-rennes.inria.fr/video/HJt6vEaXI]
  • Comparative study of harmonic and Rayleigh-Ritz procedures with applications to deflated conjugate gradients
    • Venkovic Nicolas
    • Mycek Paul
    • Giraud Luc
    • Le Maitre Olivier
    , 2020. Harmonic Rayleigh-Ritz and Rayleigh-Ritz projection techniques are compared in the context of iterative procedures to solve for small numbers of least dominant eigenvectors of large symmetric positive definite matrices. The procedures considered are (i) locally optimal conjugate gradient (CG) methods, i.e., LOBCG, (ii) thick-restart Lanczos methods, and (iii) recycled linear CG solvers, e.g., eigCG. Approaches based on principles of local optimality are adapted to enable the use of harmonic projection techniques. Upon investigating the search spaces generated by these methods, it is found that LOBCG and thick-restart Lanczos methods can be adapted, which is not the case for eigCG. Explanations are also given as to why eigCG works so well in comparison to other recycling strategies. Numerical experiments show that, while approaches based on harmonic projections consistently result in a faster convergence of eigen-residuals, they generally do not yield better convergence of the forward error of eigenvectors, until the Rayleigh quotients have converged. Then, the effect of recycling strategies is investigated on deflation for the resolution of sequences of linear systems. While non-locally optimal recycling strategies need to solve more linear systems in order to fully develop their effect on convergence, they eventually reach similar behaviors to those of locally optimal recycling procedures. While implementations based on Init-CG are robust for systems with multiple right-hand sides, this is not the case for multiple operators.
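As a point of reference for the procedures compared above, a standard Rayleigh-Ritz extraction can be sketched in a few lines (an illustrative NumPy sketch, not the authors' implementation; the harmonic variant instead projects with respect to A-weighted inner products):

```python
import numpy as np

def rayleigh_ritz(A, V, k):
    """Extract k approximate least dominant eigenpairs of a symmetric
    matrix A from the subspace spanned by the columns of V."""
    Q, _ = np.linalg.qr(V)          # orthonormal basis of the search space
    T = Q.T @ A @ Q                 # projected (Rayleigh quotient) matrix
    theta, Y = np.linalg.eigh(T)    # Ritz values and primitive Ritz vectors
    return theta[:k], Q @ Y[:, :k]  # k smallest Ritz pairs
```

If the search space exactly contains the target eigenvectors, the Ritz values reproduce the corresponding eigenvalues.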
  • Quantitative modeling links in vivo microstructural and macrofunctional organization of human and macaque insular cortex, and predicts cognitive control abilities
    • Menon Vinod
    • Gallardo Guillermo
    • Pinsk Mark
    • Nguyen Van-Dang
    • Li Jing-Rebecca
    • Cai Weidong
    • Wassermann Demian
    , 2020. (10.1101/662601)
    DOI : 10.1101/662601
  • Adaptive Bayesian SLOPE—High-dimensional Model Selection with Missing Values
    • Jiang Wei
    • Bogdan Malgorzata
    • Josse Julie
    • Miasojedow Blazej
    • Rockova Veronika
    , 2020. We consider the problem of variable selection in high-dimensional settings with missing observations among the covariates. To address this relatively understudied problem, we propose a new synergistic procedure -- adaptive Bayesian SLOPE -- which effectively combines the SLOPE method (sorted l1 regularization) together with the Spike-and-Slab LASSO method. We position our approach within a Bayesian framework which allows for simultaneous variable selection and parameter estimation, despite the missing values. As with the Spike-and-Slab LASSO, the coefficients are regarded as arising from a hierarchical model consisting of two groups: (1) the spike for the inactive and (2) the slab for the active. However, instead of assigning independent spike priors for each covariate, here we deploy a joint "SLOPE" spike prior which takes into account the ordering of coefficient magnitudes in order to control for false discoveries. Through extensive simulations, we demonstrate satisfactory performance in terms of power, FDR and estimation bias under a wide range of scenarios. Finally, we analyze a real dataset consisting of patients from Paris hospitals who underwent a severe trauma, where we show excellent performance in predicting platelet levels. Our methodology has been implemented in C++ and wrapped into an R package ABSLOPE for public use.
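For orientation, the sorted-l1 penalty that the joint "SLOPE" spike prior builds on pairs the largest coefficient magnitudes with the largest regularization weights; a minimal sketch (illustrative only, not the ABSLOPE package code):

```python
import numpy as np

def slope_norm(beta, lam):
    """Sorted-l1 (SLOPE) norm: sum_i lam_(i) * |beta|_(i), where both
    sequences are sorted in non-increasing order before pairing."""
    return float(np.dot(np.sort(lam)[::-1], np.sort(np.abs(beta))[::-1]))
```

With beta = (3, -1, 2) and lam = (2, 1, 0.5) this gives 2*3 + 1*2 + 0.5*1 = 8.5; the decreasing weight sequence is what controls false discoveries among the largest coefficients.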
  • Computing bi-tangents for transmission belts
    • Chouly Franz
    • Loubani Jinan
    • Lozinski Alexei
    • Méjri Bochra
    • Merito Kamil
    • Passos Sébastien
    • Pineda Angie
    , 2020. In this note, we determine the bi-tangents of two rotated ellipses, and we compute the coordinates of their points of tangency. For these purposes, we develop two approaches. The first one is an analytical approach in which we compute analytically the equations of the bi-tangents. This approach is valid only for some cases. The second one is geometrical and is based on the determination of the normal vector to the tangent line. This approach turns out to be more robust than the first one and is valid for any configuration of ellipses.
  • Nonparametric imputation by data depth
    • Mozharovskyi Pavlo
    • Josse Julie
    • Husson François
    Journal of the American Statistical Association, Taylor & Francis, 2020, 115 (529), pp.241-253. The presented methodology for single imputation of missing values borrows the idea from data depth --- a measure of centrality defined for an arbitrary point of the space with respect to a probability distribution or a data cloud. It consists in iterative maximization of the depth of each observation with missing values, and can be employed with any properly defined statistical depth function. At each iteration, imputation is narrowed down to optimization of a quadratic, linear, or quasiconcave function, solved analytically, by linear programming, or by the Nelder-Mead method, respectively. Being able to grasp the underlying data topology, the procedure is distribution free, allows one to impute close to the data and, in contrast to local imputation methods (k-nearest neighbors, random forest), preserves prediction possibilities, and has attractive robustness and asymptotic properties under elliptical symmetry. It is shown that its particular case --- when using Mahalanobis depth --- has a direct connection to well-known treatments for the multivariate normal model, such as iterated regression or regularized PCA. The methodology is extended to multiple imputation for data stemming from an elliptically symmetric distribution. Simulation and real data studies positively contrast the procedure with existing popular alternatives. The method has been implemented as an R package. (10.1080/01621459.2018.1543123)
    DOI : 10.1080/01621459.2018.1543123
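The Mahalanobis-depth special case mentioned in the abstract reduces to iterated conditional-mean (regression) imputation under a fitted Gaussian model; a minimal NumPy sketch of that special case (illustrative, not the authors' R package):

```python
import numpy as np

def impute_conditional_mean(X, n_iter=50):
    """Iteratively replace NaNs by the conditional mean under a Gaussian fit,
    i.e. the iterated-regression scheme that corresponds to maximizing
    Mahalanobis depth."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])  # start from column means
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False)
        for i in np.where(miss.any(axis=1))[0]:
            m, o = miss[i], ~miss[i]
            # conditional mean: mu_m + S_mo S_oo^{-1} (x_o - mu_o)
            X[i, m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(
                S[np.ix_(o, o)], X[i, o] - mu[o])
    return X
```

On perfectly correlated columns the iteration recovers the missing entry exactly, since the conditional distribution is degenerate along the regression line.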
  • Maximization of the Steklov Eigenvalues with a Diameter Constraint
    • Al Sayed Abdelkader
    • Bogosel Beniamin
    • Henrot Antoine
    • Nacry Florent
    SIAM Journal on Mathematical Analysis, Society for Industrial and Applied Mathematics, 2020, 53 (1), pp.710-729. In this paper, we address the problem of maximizing the Steklov eigenvalues with a diameter constraint. We provide an estimate of the Steklov eigenvalues for a convex domain in terms of its diameter and volume, and we show the existence of an optimal convex domain. We establish that balls are never maximizers, even for the first non-trivial eigenvalue, which contrasts with the case of volume or perimeter constraints. Under an additional regularity assumption, we are able to prove that the Steklov eigenvalue is multiple for the optimal domain. We illustrate our theoretical results by computing some optimal domains in the plane with a numerical algorithm. (10.1137/20M1335042)
    DOI : 10.1137/20M1335042
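For context (this is the standard definition, not a result specific to this paper), the Steklov eigenvalues of a Lipschitz domain $\Omega \subset \mathbb{R}^d$ are the values $\sigma$ for which the boundary-value problem

```latex
\begin{cases}
\Delta u = 0 & \text{in } \Omega,\\
\dfrac{\partial u}{\partial n} = \sigma\, u & \text{on } \partial\Omega,
\end{cases}
\qquad 0 = \sigma_0 < \sigma_1(\Omega) \le \sigma_2(\Omega) \le \cdots
```

admits a non-trivial solution; the paper studies the maximization of $\sigma_k(\Omega)$ under the constraint $\operatorname{diam}(\Omega) \le 1$.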
  • Second order linear differential equations with analytic uncertainties: stochastic analysis via the computation of the probability density function
    • Jornet Marc
    • Calatayud Julia
    • Le Maitre Olivier
    • Cortés Juan Carlos
    Journal of Computational and Applied Mathematics, Elsevier, 2020, 374, pp.112770. This paper concerns the analysis of random second order linear differential equations. Usually, solving these equations consists of computing the first statistics of the response process, and that task has been an essential goal in the literature. A more ambitious objective is the computation of the solution probability density function. We present advances on these two aspects in the case of general random non-autonomous second order linear differential equations with analytic data processes. The Frobenius method is employed to obtain the stochastic solution in the form of a mean square convergent power series. We demonstrate that the convergence requires the boundedness of the random input coefficients. Further, the mean square error of the Frobenius method is proved to decrease exponentially with the number of terms in the series, although not uniformly in time. Regarding the probability density function of the solution at a given time, which is the focus of the paper, we rely on the law of total probability to express it in closed-form as an expectation. For the computation of this expectation, a sequence of approximating density functions is constructed by reducing the dimensionality of the problem using the truncated power series of the fundamental set. We prove several theoretical results regarding the pointwise convergence of the sequence of density functions and the convergence in total variation. The pointwise convergence turns out to be exponential under a Lipschitz hypothesis. As the density functions are expressed in terms of expectations, we propose a crude Monte Carlo sampling algorithm for their estimation. This algorithm is implemented and applied on several numerical examples designed to illustrate the theoretical findings of the paper. After that, the efficiency of the algorithm is improved by employing the control variates method. Numerical examples corroborate the variance reduction of the Monte Carlo approach. (10.1016/j.cam.2020.112770)
    DOI : 10.1016/j.cam.2020.112770
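The law-of-total-probability device in the abstract writes the target density as an expectation of conditional densities, which a crude Monte Carlo average then estimates; a toy sketch with an explicit Gaussian model of our choosing (not the paper's random differential equation):

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(u, s2=1.0):
    """Density of N(0, s2) at u."""
    return np.exp(-u**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

# Law of total probability: f_X(x) = E_Y[ f_{X|Y}(x | Y) ].
# Toy model: X = Y + Z with Y, Z ~ N(0,1) independent, so
# f_{X|Y}(x|y) = phi(x - y) and the exact density of X is N(0, 2).
y = rng.standard_normal(200_000)
x = 0.5
crude = phi(x - y).mean()       # crude Monte Carlo estimate of f_X(x)
exact = phi(x, s2=2.0)
```

The control variates method mentioned at the end of the abstract would reduce the variance of `crude` by subtracting a correlated quantity with known mean; the crude average above already converges at the usual O(n^{-1/2}) rate.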
  • SPIX: a new software package to reveal chemical reactions at trace amounts in very complex mixtures from high-resolution mass spectra data sets
    • Nicol Edith
    • Xu Yao
    • Varga Zsuzsanna
    • Kinani Said
    • Bouchonnet Stéphane
    • Lavielle Marc
    Rapid Communications in Mass Spectrometry, Wiley, 2020. Rationale: High-resolution mass spectrometry-based non-targeted screening has a huge potential for applications in environmental sciences, engineering and regulation. However, it produces big data for which full appropriate processing is a real challenge; the development of processing software is the last building-block to enable large-scale use of this approach. Methods: A new software application, SPIX, has been developed to extract relevant information from high-resolution mass-spectrum datasets. Dealing with intrinsic sample variability and reducing operator subjectivity, it opens up opportunities and promising prospects in many areas of analytical chemistry. SPIX is freely available at: http://spix.webpopix.org. Results: Two features of the software are presented in the field of environmental analysis. An example illustrates how SPIX reveals photodegradation reactions in wastewater by fitting kinetic models to significant changes in ion abundance over time. A second example shows the ability of SPIX to detect photoproducts at trace amounts in river water, through comparison of datasets from samples taken before and after irradiation. Conclusions: SPIX has shown its ability to reveal relevant modifications between two series of big data sets, allowing one, for instance, to study the consequences of a given event on a complex substrate. Most of all, SPIX can reveal and monitor any kind of reaction in all types of mixtures; to our knowledge, it is the only software currently available that allows this.
  • Optimality conditions in variational form for non-linear constrained stochastic control problems
    • Pfeiffer Laurent
    Mathematical Control and Related Fields, AIMS, 2020, 10 (3), pp.493-526. Optimality conditions in the form of a variational inequality are proved for a class of constrained optimal control problems of stochastic differential equations. The cost function and the inequality constraints are functions of the probability distribution of the state variable at the final time. The analysis uses in an essential manner a convexity property of the set of reachable probability distributions. An augmented Lagrangian method based on the obtained optimality conditions is proposed and analyzed for solving iteratively the problem. At each iteration of the method, a standard stochastic optimal control problem is solved by dynamic programming. Two academic examples are investigated. (10.3934/mcrf.2020008)
    DOI : 10.3934/mcrf.2020008
  • High to Low pellet cladding gap heat transfer modeling methodology in an uncertainty quantification framework for a PWR Rod Ejection Accident with best estimate coupling
    • Delipei Gregory Kyriakos
    • Garnier Josselin
    • Le Pallec Jean-Charles
    • Normand Benoit
    EPJ N - Nuclear Sciences & Technologies, EDP Sciences, 2020, 6, pp.56. High to Low modeling approaches can alleviate the computationally expensive fuel modeling in nuclear reactor transient uncertainty quantification. This is especially the case for Rod Ejection Accident (REA) in Pressurized Water Reactors (PWR), where strong multi-physics interactions occur. In this work, we develop and propose a pellet cladding gap heat transfer (Hgap) High to Low modeling methodology for a PWR REA in an uncertainty quantification framework. The methodology involves the calibration of a simplified $Hgap$ model based on high fidelity simulations with the fuel-thermomechanics code ALCYONE1. The calibrated model is then introduced into the CEA-developed CORPUS Best Estimate (BE) multi-physics coupling between APOLLO3® and FLICA4. This creates an Improved Best Estimate (IBE) coupling that is then used for an uncertainty quantification study. The results indicate that with IBE the distance to boiling crisis uncertainty is decreased from 57% to 42%. This is reflected in the decreased sensitivity of $Hgap$: in the BE coupling, $Hgap$ was responsible for 50% of the output variance, while in IBE it is close to 0. These results show the potential gain of High to Low approaches for $Hgap$ modeling in REA uncertainty analyses. (10.1051/epjn/2020018)
    DOI : 10.1051/epjn/2020018
  • Validation strategy of reduced-order two-fluid flow models based on a hierarchy of direct numerical simulations
    • Cordesse Pierre
    • Remigi Alberto
    • Duret Benjamin
    • Murrone Angelo
    • Ménard Thibaut
    • Demoulin François-Xavier
    • Massot Marc
    Flow, Turbulence and Combustion, Springer Verlag, 2020, 105 (4), pp.1381-1411. Whereas direct numerical simulations (DNS) have reached a high level of description in the field of atomization processes, they are not yet able to cope with industrial needs since they lack resolution and are too costly. Predictive simulations relying on reduced order modeling have become mandatory for applications ranging from cryotechnic to aeronautic combustion chamber liquid injection. Two-fluid models provide a good basis in order to conduct such simulations, even if recent advances allow one to refine subscale modeling using geometrical variables in order to reach a unified model including separate phases and disperse phase descriptions based on high order moment methods. The simulation of such models has to rely on dedicated numerical methods and still lacks assessment of its predictive capabilities. The present paper constitutes a building block of the investigation of a hierarchy of test-cases designed to be amenable to DNS while close enough to industrial configurations, for which we propose a comparison of two-fluid compressible simulations with DNS data-bases. We focus in the present contribution on an air-assisted water atomization using a planar liquid sheet injector. Qualitative and quantitative comparisons with incompressible DNS allow us to identify and analyze strengths and weaknesses of the reduced-order modeling and numerical approach in this specific configuration and set a framework for more refined models since they already provide a very interesting level of comparison on averaged quantities. (10.1007/s10494-020-00154-w)
    DOI : 10.1007/s10494-020-00154-w
  • Optimal Hedging Under Fast-Varying Stochastic Volatility
    • Garnier Josselin
    • Sølna Knut
    SIAM Journal on Financial Mathematics, Society for Industrial and Applied Mathematics, 2020, 11 (1), pp.274-325. In a market with a rough or Markovian mean-reverting stochastic volatility there is no perfect hedge. Here it is shown how various delta-type hedging strategies perform and can be evaluated in such markets in the case of European options. A precise characterization of the hedging cost, the replication cost caused by the volatility fluctuations, is presented in an asymptotic regime of rapid mean reversion for the volatility fluctuations. The optimal dynamic asset-based hedging strategy in the considered regime is identified as the so-called "practitioners" delta hedging scheme. It is moreover shown that the performances of the delta-type hedging schemes are essentially independent of the regularity of the volatility paths in the considered regime and that the hedging costs are related to a Vega risk martingale whose magnitude is proportional to a new market risk parameter. It is also shown via numerical simulations that the proposed hedging schemes, which derive from option price approximations in the regime of rapid mean reversion, are robust: the "practitioners" delta hedging scheme that is identified as being optimal by our asymptotic analysis when the mean reversion time is small seems to be optimal with arbitrary mean reversion times. (10.1137/18M1221655)
    DOI : 10.1137/18M1221655
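As background for the delta-type strategies analyzed above, a discrete self-financing delta hedge is easy to simulate in the constant-volatility Black-Scholes baseline (zero rates); the terminal P&L is the hedging error, which shrinks as rebalancing becomes more frequent. This is an illustrative baseline, not the paper's stochastic-volatility setting:

```python
import numpy as np
from math import erf, log, sqrt

def ncdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, tau, sigma):
    """Black-Scholes price and delta of a European call (zero rates)."""
    d1 = (log(S / K) + 0.5 * sigma**2 * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * ncdf(d1) - K * ncdf(d2), ncdf(d1)

rng = np.random.default_rng(1)
S0, K, T, sigma, n = 100.0, 100.0, 1.0, 0.2, 5000
dt = T / n
S = S0
price, delta = bs_call(S, K, T, sigma)
cash = price - delta * S                      # sell the option, buy delta shares
for i in range(1, n):
    S *= np.exp(-0.5 * sigma**2 * dt + sigma * sqrt(dt) * rng.standard_normal())
    _, new_delta = bs_call(S, K, T - i * dt, sigma)
    cash -= (new_delta - delta) * S           # self-financing rebalancing
    delta = new_delta
S *= np.exp(-0.5 * sigma**2 * dt + sigma * sqrt(dt) * rng.standard_normal())
pnl = cash + delta * S - max(S - K, 0.0)      # terminal hedging error
```

In the paper's setting the volatility itself fluctuates, so even continuous rebalancing leaves a residual replication cost; the "practitioners" scheme plugs an adjusted implied volatility into formulas like the one above.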
  • The mean field Schrödinger problem: ergodic behavior, entropy estimates and functional inequalities
    • Backhoff Julio
    • Conforti Giovanni
    • Gentil Ivan
    • Léonard Christian
    Probability Theory and Related Fields, Springer Verlag, 2020, 178, pp.475-530. (10.1007/s00440-020-00977-8)
    DOI : 10.1007/s00440-020-00977-8
  • Regression Monte Carlo methods for HJB-type equations: which approximation space?
    • Barrera David
    • Gobet Emmanuel
    • Lopez-Salas Jose
    • Turkedjiev Plamen
    • Vasquez Carlos
    • Zubelli Jorge
    , 2020.
  • Tropical planar networks
    • Gaubert Stéphane
    • Niv Adi
    Linear Algebra and its Applications, Elsevier, 2020, 595, pp.123-144. We show that every tropical totally positive matrix can be uniquely represented as the transfer matrix of a canonical totally connected weighted planar network. We deduce a uniqueness theorem for the factorization of a tropical totally positive matrix in terms of elementary Jacobi matrices. (10.1016/j.laa.2020.02.019)
    DOI : 10.1016/j.laa.2020.02.019
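In the max-plus (tropical) semiring in which such transfer matrices live, matrix multiplication replaces sum by max and product by sum, so an entry of a product is the maximal total weight over two-arc paths through the network; a one-function NumPy sketch:

```python
import numpy as np

def maxplus_matmul(A, B):
    """Tropical (max-plus) matrix product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    # [i, k, j] entry is A[i, k] + B[k, j]; reduce over the middle index.
    return (A[:, :, None] + B[None, :, :]).max(axis=1)
```

For transfer matrices of weighted planar networks, iterating this product gives, in each entry, the maximal weight of a path between the corresponding source and sink.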
  • A quantitative McDiarmid’s inequality for geometrically ergodic Markov chains
    • Havet Antoine
    • Lerasle Matthieu
    • Moulines Éric
    • Vernet Elodie
    Electronic Communications in Probability, Institute of Mathematical Statistics (IMS), 2020, 25. (10.1214/20-ECP286)
    DOI : 10.1214/20-ECP286
  • Kinetic derivation of diffuse-interface fluid models
    • Giovangigli Vincent
    Physical Review E, American Physical Society (APS), 2020, 102. We present a full derivation of capillary fluid equations from the kinetic theory of dense gases. These equations involve van der Waals' gradient energy, Korteweg's tensor, and Dunn and Serrin's heat flux as well as viscous and heat dissipative fluxes. Starting from macroscopic equations obtained from the kinetic theory of dense gases, we use a second-order expansion of the pair distribution function in order to derive the diffuse interface model. The capillary extra terms and the capillarity coefficient are then associated with intermolecular forces and the pair interaction potential. (10.1103/physreve.102.012110)
    DOI : 10.1103/physreve.102.012110
  • Comments on the 2018 MON 810 cultivation monitoring report. Paris, 25 February 2020
    • Du Haut Conseil Des Biotechnologies Comité Scientifique
    • Angevin Frédérique
    • Bagnis Claude
    • Bar-Hen Avner
    • Barny Marie-Anne
    • Boireau Pascal
    • Brévault Thierry
    • Chauvel Bruno B.
    • Collonnier Cécile
    • Couvet Denis
    • Dassa Elie
    • Demeneix Barbara
    • Franche Claudine
    • Guerche Philippe
    • Guillemain Joël
    • Hernandez Raquet Guillermina
    • Khalife Jamal
    • Klonjkowski Bernard
    • Lavielle Marc
    • Le Corre Valérie
    • Lefèvre François
    • Lemaire Olivier
    • Lereclus Didier D.
    • Maximilien Rémy
    • Meurs Eliane
    • Naffakh Nadia
    • Négre Didier
    • Noyer Jean-Louis
    • Ochatt Sergio
    • Pages Jean-Christophe
    • Raynaud Xavier
    • Regnault-Roger Catherine
    • Renard Michel M.
    • Renault Tristan
    • Saindrenan Patrick
    • Simonet Pascal
    • Troadec Marie-Bérengère
    • Vaissière Bernard
    • de Verneuil Hubert
    • Vilotte Jean-Luc
    , 2020, pp.35 p.. The analyses contained in the monitoring report of Bayer Agriculture BVBA reveal no major problem associated with the cultivation of MON 810 maize in 2018. However, the Scientific Committee (SC) of the HCB again identifies certain methodological weaknesses and limitations concerning the monitoring of the sensitivity of the targeted pests to the Cry1Ab toxin, calling the report's conclusions into question. The HCB considers in particular that the use of a diagnostic dose has certain limitations for the early detection of resistance evolution, both in its underlying principle and in its implementation by Bayer, and recommends an alternative F2-screen method, which makes it possible to determine the frequency of resistance alleles within a population of target pests. The HCB also makes recommendations intended to strengthen the implementation of refuge zones so as to prevent or delay the development of resistance to the Cry1Ab toxin in the targeted pests. Concerning general surveillance, the SC of the HCB notes a problem of methodological relevance with respect to the questions studied, with arbitrary decision rules, incorrectly justified conclusions, and a possible bias associated with the survey format for the panel of farmers who agreed to answer the questionnaire. Finally, the SC of the HCB recommends that the monitoring report consider the presence of teosinte in MON 810 maize cultivation areas in Spain and the potential risks associated with a possible introgression of MON 810 maize genes into teosinte.
  • Orlicz Random Fourier Features
    • Chamakh Linda
    • Gobet Emmanuel
    • Szabó Zoltán
    Journal of Machine Learning Research, Microtome Publishing, 2020, 21 (145), pp.1−37. Kernel techniques are among the most widely-applied and influential tools in machine learning, with applications in virtually all areas of the field. To combine this expressive power with computational efficiency, numerous randomized schemes have been proposed in the literature, among which random Fourier features (RFF) are probably the simplest and most popular. While RFFs were originally designed for the approximation of kernel values, recently they have been adapted to kernel derivatives, and hence to the solution of large-scale tasks involving function derivatives. Unfortunately, the understanding of the RFF scheme for the approximation of higher-order kernel derivatives is quite limited due to the challenging polynomially-growing nature of the underlying function class in the empirical process. To tackle this difficulty, we establish a finite-sample deviation bound for a general class of polynomial-growth functions under an α-exponential Orlicz condition on the distribution of the sample. Instantiating this result for RFFs, our finite-sample uniform guarantee implies a.s. convergence with tight rate for arbitrary kernels with α-exponential Orlicz spectrum and any order of derivative.
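The classical RFF construction of Rahimi and Recht, which the bounds above refine, approximates a shift-invariant kernel by cosine features whose frequencies are drawn from the kernel's spectral measure; a minimal sketch for the Gaussian kernel (all parameter values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 3, 20_000          # input dimension, number of random features
gamma = 0.5               # k(x, y) = exp(-gamma * ||x - y||^2)

# For the Gaussian kernel the spectral measure is N(0, 2*gamma*I); features
# z(x) = sqrt(2/D) * cos(W x + b) satisfy E[z(x) . z(y)] = k(x, y).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)
z = lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = np.array([1.0, 0.0, -1.0]), np.array([0.5, 0.5, -0.5])
approx = z(x) @ z(y)                        # RFF approximation of k(x, y)
exact = np.exp(-gamma * np.sum((x - y)**2))
```

The approximation error decays like O(D^{-1/2}); the paper's contribution is a uniform guarantee of this kind for derivatives of the kernel as well.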
  • Computing invariant sets of random differential equations using polynomial chaos
    • Breden Maxime
    • Kuehn Christian
    SIAM Journal on Applied Dynamical Systems, Society for Industrial and Applied Mathematics, 2020, 19 (1), pp.577–618. Differential equations with random parameters have gained significant prominence in recent years due to their importance in mathematical modelling and data assimilation. In many cases, random ordinary differential equations (RODEs) are studied by using Monte-Carlo methods or by direct numerical simulation techniques using polynomial chaos (PC), i.e., by a series expansion of the random parameters in combination with forward integration. Here we take a dynamical systems viewpoint and focus on the invariant sets of differential equations such as steady states, stable/unstable manifolds, periodic orbits, and heteroclinic orbits. We employ PC to compute representations of all these different types of invariant sets for RODEs. This allows us to obtain fast sampling, geometric visualization of distributional properties of invariant sets, and uncertainty quantification of dynamical output such as periods or locations of orbits. We apply our techniques to a predator-prey model, where we compute steady states and stable/unstable manifolds. We also include several benchmarks to illustrate the numerical efficiency of adaptively chosen PC depending upon the random input. Then we employ the methods for the Lorenz system, obtaining computational PC representations of periodic orbits, stable/unstable manifolds and heteroclinic orbits. (10.1137/18M1235818)
    DOI : 10.1137/18M1235818
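The PC representation of an invariant set can be illustrated on the simplest case, a random steady state: below, the steady state x*(ξ) = sqrt(a(ξ)) of the toy RODE dx/dt = a(ξ) - x² with a(ξ) = 2 + 0.1ξ and ξ ~ N(0,1) (a toy example of ours, not the paper's predator-prey or Lorenz systems) is projected onto probabilists' Hermite polynomials He_k via Gauss-Hermite quadrature, using c_k = E[x* He_k(ξ)] / k!:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss
from numpy.polynomial.hermite_e import hermeval

t, w = hermgauss(40)                 # nodes/weights for weight exp(-t^2)
xi = np.sqrt(2.0) * t                # change of variables to N(0, 1)
gw = w / np.sqrt(np.pi)              # Gaussian quadrature weights (sum to 1)
xstar = np.sqrt(2.0 + 0.1 * xi)      # random steady state at the nodes

K = 6                                # truncation order of the PC expansion
c = np.zeros(K)
for k in range(K):
    ek = np.zeros(k + 1)
    ek[k] = 1.0                      # coefficient vector selecting He_k
    c[k] = np.sum(gw * xstar * hermeval(xi, ek)) / math.factorial(k)
```

Evaluating the truncated series, `hermeval(xi0, c)`, reproduces x*(ξ₀) for any ξ₀; the coefficients also give the moments directly (e.g. c[0] is the mean of the steady state).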
  • Diagonal Acceleration for Covariance Matrix Adaptation Evolution Strategies
    • Akimoto Youhei
    • Hansen Nikolaus
    Evolutionary Computation, Massachusetts Institute of Technology Press (MIT Press), 2020, 28 (3), pp.405-435. We introduce an acceleration for covariance matrix adaptation evolution strategies (CMA-ES) by means of adaptive diagonal decoding (dd-CMA). This diagonal acceleration endows the default CMA-ES with the advantages of separable CMA-ES without inheriting its drawbacks. Technically, we introduce a diagonal matrix $D$ that expresses coordinate-wise variances of the sampling distribution in $DCD$ form. The diagonal matrix can learn a rescaling of the problem in the coordinates within a linear number of function evaluations. Diagonal decoding can also exploit separability of the problem, but, crucially, does not compromise the performance on non-separable problems. The latter is accomplished by modulating the learning rate for the diagonal matrix based on the condition number of the underlying correlation matrix. dd-CMA-ES not only combines the advantages of default and separable CMA-ES, but may achieve overadditive speedup: it improves the performance, and even the scaling, of the better of default and separable CMA-ES on classes of non-separable test functions that reflect, arguably, a landscape feature commonly observed in practice. The paper makes two further secondary contributions: we introduce two different approaches to guarantee positive definiteness of the covariance matrix with active CMA, which is valuable in particular with large population size; we revise the default parameter setting in CMA-ES, proposing accelerated settings in particular for large dimension. All our contributions can be viewed as independent improvements of CMA-ES, yet they are also complementary and can be seamlessly combined. In numerical experiments with dd-CMA-ES up to dimension 5120, we observe remarkable improvements over the original covariance matrix adaptation on functions with coordinate-wise ill-conditioning. The improvement is observed also for large population sizes up to about dimension squared. (10.1162/evco_a_00260)
    DOI : 10.1162/evco_a_00260
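The sampling step behind diagonal decoding follows directly from the abstract: candidates are drawn as x = m + σ D C^{1/2} z with z ~ N(0, I), so the effective covariance is σ² DCD. A minimal NumPy sketch with arbitrary values (the adaptation of D, C and σ is the actual content of dd-CMA and is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                  # search-space dimension
lam = 20_000                           # sample count (large only to expose the statistics)
m = np.zeros(n)                        # distribution mean
sigma = 1.0                            # step size
C = np.eye(n)                          # correlation-shaped covariance matrix
D = np.array([1.0, 2.0, 4.0, 8.0])     # diagonal of the decoding matrix D

# Sample x = m + sigma * D C^{1/2} z; effective covariance is sigma^2 * D C D.
A = np.linalg.cholesky(C)              # C^{1/2}
Z = rng.standard_normal((lam, n))
X = m + sigma * (Z @ A.T) * D          # D applied coordinate-wise
```

With C = I this reduces to separable CMA-ES sampling; a non-diagonal C restores the full covariance model, which is why D can be adapted aggressively without giving up performance on non-separable problems.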
  • Fluctuation theory in the Boltzmann--Grad limit
    • Bodineau Thierry
    • Gallagher Isabelle
    • Saint-Raymond Laure
    • Simonella Sergio
    Journal of Statistical Physics, Springer Verlag, 2020, 180, pp.873–895. We develop a rigorous theory of hard-sphere dynamics in the kinetic regime, away from thermal equilibrium. In the low density limit, the empirical density obeys a law of large numbers and the dynamics is governed by the Boltzmann equation. Deviations from this behaviour are described by dynamical correlations, which can be fully characterized for short times. This provides both a fluctuating Boltzmann equation and large deviation asymptotics.
  • Null space gradient flows for constrained optimization with applications to shape optimization
    • Feppon Florian
    • Allaire Grégoire
    • Dapogny Charles
    ESAIM: Control, Optimisation and Calculus of Variations, EDP Sciences, 2020, 26, pp.90. The purpose of this article is to introduce a gradient-flow algorithm for solving equality and inequality constrained optimization problems, which is particularly suited for shape optimization applications. We rely on a variant of the Ordinary Differential Equation (ODE) approach proposed by Yamashita (Math. Program. 18 (1980) 155–168) for equality constrained problems: the search direction is a combination of a null space step and a range space step, aiming to decrease the value of the minimized objective function and the violation of the constraints, respectively. Our first contribution is to propose an extension of this ODE approach to optimization problems featuring both equality and inequality constraints. In the literature, a common practice consists in reducing inequality constraints to equality constraints by the introduction of additional slack variables. Here, we rather resolve their local combinatorial character by computing the projection of the gradient of the objective function onto the cone of feasible directions. This is achieved by solving a dual quadratic programming subproblem whose size equals the number of active or violated constraints. The solution to this problem makes it possible to identify the inequality constraints to which the optimization trajectory should remain tangent. Our second contribution is a formulation of our gradient flow in the context of (infinite-dimensional) Hilbert spaces, and of even more general optimization sets such as sets of shapes, as it occurs in shape optimization within the framework of Hadamard's boundary variation method. The cornerstone of this formulation is the classical operation of extension and regularization of shape derivatives. The numerical efficiency and ease of implementation of our algorithm are demonstrated on realistic shape optimization problems. (10.1051/cocv/2020015)
    DOI : 10.1051/cocv/2020015
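For the purely equality-constrained case, the two steps described above have closed forms: with A = Dh(x), the null space step projects ∇f onto ker A and the range space step pulls h back toward 0. A minimal Euler-discretized sketch on a toy problem (illustrative only; the paper's algorithm also handles inequality constraints and Hilbert-space/shape settings):

```python
import numpy as np

def step(x, grad_f, h, jac_h, dt=0.05, alpha=1.0):
    """One explicit Euler step of the flow x' = -xi_J(x) - alpha * xi_C(x) for
    the equality constraint h(x) = 0, with A = Dh(x):
      xi_J = (I - A^T (A A^T)^{-1} A) grad f   (null space step: decrease f)
      xi_C = A^T (A A^T)^{-1} h(x)             (range space step: restore h = 0)
    """
    A = np.atleast_2d(jac_h(x))
    g = grad_f(x)
    AAt = A @ A.T
    xi_C = A.T @ np.linalg.solve(AAt, np.atleast_1d(h(x)))
    xi_J = g - A.T @ np.linalg.solve(AAt, A @ g)
    return x - dt * (xi_J + alpha * xi_C)

# Toy problem: minimize f(x) = ||x - p||^2 on the circle ||x|| = 1;
# the minimizer is the radial projection p / ||p||.
p = np.array([2.0, 1.0])
x = np.array([0.0, 1.5])               # infeasible starting point
for _ in range(2000):
    x = step(x, lambda x: 2 * (x - p), lambda x: x @ x - 1.0, lambda x: 2 * x)
```

Because xi_J is tangent to the constraint set and xi_C decreases |h| at first order, the trajectory simultaneously restores feasibility and decreases f, which is the behavior the article extends to inequality constraints via the dual quadratic subproblem.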