Publications

PhD theses defended at CMAP are available via the following link:
Discover the CMAP theses

Listed below, by year, are the publications recorded in the HAL open archive.

2020

  • On the convergence of stochastic approximations under a subgeometric ergodic Markov dynamic
    • Debavelaere Vianney
    • Durrleman Stanley
    • Allassonnière Stéphanie
    , 2020. In this paper, we extend the framework of the convergence of stochastic approximations. Such a procedure is used in many methods such as parameter estimation inside a Metropolis-Hastings algorithm, stochastic gradient descent or the stochastic Expectation Maximization algorithm. It is given by $\theta_{n+1} = \theta_n + \Delta_{n+1} H_{\theta_n}(X_{n+1})$, where $(X_n)_{n\in\mathbb{N}}$ is a sequence of random variables following a parametric distribution which depends on $(\theta_n)_{n\in\mathbb{N}}$, and $(\Delta_n)_{n\in\mathbb{N}}$ is a step sequence. The convergence of such a stochastic approximation has already been proved under an assumption of geometric ergodicity of the Markov dynamic. However, in many practical situations this hypothesis is not satisfied, for instance for any heavy-tailed target distribution in a Monte Carlo Metropolis-Hastings algorithm. In this paper, we relax this hypothesis and prove the convergence of the stochastic approximation by only assuming a subgeometric ergodicity of the Markov dynamic. This result opens up the possibility to derive more generic algorithms with proven convergence. As an example, we first study an adaptive Markov Chain Monte Carlo algorithm where the proposal distribution is adapted by learning the variance of a heavy-tailed target distribution. We then apply our work to Independent Component Analysis when a positive heavy-tailed noise leads to a subgeometric dynamic in an Expectation Maximization algorithm. (10.1214/154957804100000000)
    DOI : 10.1214/154957804100000000
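
A minimal sketch of the recursion described above (a hypothetical toy, not the paper's code): the stochastic approximation $\theta_{n+1} = \theta_n + \Delta_{n+1} H_{\theta_n}(X_{n+1})$ is driven here by a Metropolis-Hastings chain targeting a heavy-tailed (Cauchy) distribution, with $H_\theta(x) = \operatorname{sign}(x - \theta)$ so that $\theta_n$ tracks the target median; unlike the general framework of the paper, the chain below does not depend on $\theta_n$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative heavy-tailed target (standard Cauchy), for which random-walk
# Metropolis-Hastings chains are only subgeometrically ergodic.
log_target = lambda x: -np.log(np.pi * (1.0 + x ** 2))

def mh_step(x, scale=1.0):
    """One random-walk Metropolis-Hastings move targeting log_target."""
    prop = x + scale * rng.standard_normal()
    accept = np.log(rng.uniform()) < log_target(prop) - log_target(x)
    return prop if accept else x

# Robbins-Monro recursion theta_{n+1} = theta_n + Delta_{n+1} * H_{theta_n}(X_{n+1})
# with the toy choice H_theta(x) = sign(x - theta), whose root is the target median.
theta, x = 5.0, 0.0
for n in range(1, 100_000):
    x = mh_step(x)
    delta = n ** -0.7            # slowly decreasing step sequence Delta_n
    theta += delta * np.sign(x - theta)

print(theta)  # close to 0, the median of the Cauchy target
```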
  • Availability of the Molecular Switch XylR Controls Phenotypic Heterogeneity and Lag Duration during Escherichia coli Adaptation from Glucose to Xylose
    • Barthe Manon
    • Tchouanti Josué
    • Gomes Pedro Henrique
    • Bideaux Carine
    • Lestrade Delphine
    • Graham Carl
    • Steyer Jean-Philippe
    • Meleard Sylvie
    • Harmand Jérôme
    • Gorret Nathalie
    • Cocaign-Bousquet Muriel
    • Enjalbert Brice
    mBio, American Society for Microbiology, 2020, 11 (6). The glucose-xylose metabolic transition is of growing interest as a model to explore cellular adaptation since these molecules are the main substrates resulting from the deconstruction of lignocellulosic biomass. Here, we investigated the role of the XylR transcription factor in the length of the lag phases when the bacterium Escherichia coli needs to adapt from glucose- to xylose-based growth. First, a variety of lag times were observed when different strains of E. coli were switched from glucose to xylose. These lag times were shown to be controlled by XylR availability in the cells, with no further effect on the growth rate on xylose. XylR titration provoked long lag times, demonstrated to result from phenotypic heterogeneity during the switch from glucose to xylose, with one subpopulation unable to resume exponential growth, whereas the other subpopulation grew exponentially on xylose. A stochastic model was then constructed based on the assumption that XylR availability influences the probability of individual cells to switch to xylose growth. The model was used to understand how XylR behaves as a molecular switch determining the bistability set-up. This work shows that the length of lag phases in E. coli is controllable and reinforces the role of stochastic mechanisms in cellular adaptation, paving the way for new strategies for the better use of sustainable carbon sources in the bioeconomy. (10.1128/mBio.02938-20)
    DOI : 10.1128/mBio.02938-20
  • Shape reconstruction of deposits inside a steam generator using eddy current measurements
    • Girardon Hugo
    , 2020. Non-destructive testing is an essential tool to assess the safety of the facilities within nuclear power plants. In particular, conductive deposits on U-tubes in steam generators constitute a safety issue as they may block the cooling loop. To detect these deposits, eddy-current probes are introduced inside the U-tubes to generate currents and measure back an impedance signal. We develop a shape optimization technique with regularized gradient descent to invert these measurements and recover the deposit shape. To deal with the unknown geometry, and its possibly complex topological nature, we propose to model it using a level set function. The methodology is first validated on synthetic axisymmetric configurations, and fast convergence is ensured by careful adaptation of the gradient steps and regularization parameters. Using the actual domain from which the acquisitions are made, we then consider a more realistic modeling that incorporates the support plate and the presence of imperfections on the tube interior section. We employ in particular an asymptotic model to take these imperfections into account and treat them as additional unknowns in our inverse problem. A multi-objective optimization strategy, based on the use of different operating frequencies, is then developed to solve this problem. We present various numerical examples with synthetic and experimental data showing the viability of our approach. The focus is then placed on the transposition of the 2D-axisymmetric work to more generic 3D configurations. Solving the Maxwell eddy-current equations in 3D raises modeling issues related to the choice of the problem formulation, as well as high computational costs that need to be reduced before discussing the reconstruction algorithm. Using the knowledge acquired with the 2D-axisymmetric reconstruction, an efficient inversion strategy is then proposed and implemented on 3D synthetic data. Validating numerical examples demonstrate the feasibility of the inversion, even for large data, at a relatively moderate cost and with good accuracy and robustness with respect to noise and modeling errors.
  • To quarantine, or not to quarantine: A theoretical framework for disease control via contact tracing
    • Lunz Davin
    • Batt Gregory
    • Ruess Jakob
    Epidemics, Elsevier, 2020, 34. Contact tracing via smartphone applications is expected to be of major importance for maintaining control of the COVID-19 pandemic. However, viable deployment demands a minimal quarantine burden on the general public. That is, consideration must be given to unnecessary quarantining imposed by a contact tracing policy. Previous studies have modeled the role of contact tracing, but have not addressed how to balance these two competing needs. We propose a modeling framework that captures contact heterogeneity. This allows contact prioritization: contacts are only notified if they were acutely exposed to individuals who eventually tested positive. The framework thus allows us to address the delicate balance of preventing disease spread while minimizing the social and economic burdens of quarantine. This optimal contact tracing strategy is studied as a function of limitations in testing resources, partial technology adoption, and other intervention methods such as social distancing and lockdown measures. The framework is globally applicable, as the distribution describing contact heterogeneity is directly adaptable to any digital tracing implementation. (10.1016/j.epidem.2020.100428)
    DOI : 10.1016/j.epidem.2020.100428
  • Tropical Dynamic Programming for Lipschitz Multistage Stochastic Programming
    • Akian Marianne
    • Chancelier Jean-Philippe
    • Tran Benoît
    , 2020. We present an algorithm called Tropical Dynamic Programming (TDP) which builds upper and lower approximations of the Bellman value functions in risk-neutral Multistage Stochastic Programming (MSP), with independent noises of finite supports. To tackle the curse of dimensionality, popular parametric variants of Approximate Dynamic Programming approximate the Bellman value function as linear combinations of basis functions. Here, Tropical Dynamic Programming builds upper (resp. lower) approximations of a given value function as min-plus linear (resp. max-plus linear) combinations of "basic functions". At each iteration, TDP adds a new basic function to the current combination following a deterministic criterion introduced by Baucke, Downward and Zackeri in 2018 for a variant of Stochastic Dual Dynamic Programming. We prove, for every Lipschitz MSP, the asymptotic convergence of the generated approximating functions of TDP to the Bellman value functions on sets of interest. We illustrate this result on MSP with linear dynamics and polyhedral costs.
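
As a purely illustrative sketch of the min-plus / max-plus combinations mentioned in the abstract above (not the TDP algorithm itself, which selects basic functions adaptively along a Bellman recursion), the toy example below brackets a convex value function between a max-plus combination of affine cuts (lower bound) and a min-plus combination of Lipschitz caps (upper bound):

```python
import numpy as np

# Toy value function on [0, 2]: V(x) = x^2 (convex, Lipschitz constant 4 on this interval)
V = lambda x: x ** 2
xs = np.linspace(0.0, 2.0, 201)
centers = np.array([0.0, 0.5, 1.0, 1.5, 2.0])

# Max-plus linear combination of affine basic functions (tangent cuts): a lower bound
lower = np.max([V(c) + 2 * c * (xs - c) for c in centers], axis=0)

# Min-plus linear combination of Lipschitz basic functions (caps): an upper bound
L = 4.0
upper = np.min([V(c) + L * np.abs(xs - c) for c in centers], axis=0)

assert np.all(lower <= V(xs) + 1e-12) and np.all(V(xs) <= upper + 1e-12)
```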
  • The Monte Carlo Transformer: a stochastic self-attention model for sequence prediction
    • Martin Alice
    • Ollion Charles
    • Strub Florian
    • Le Corff Sylvain
    • Pietquin Olivier
    , 2020. This paper introduces the Sequential Monte Carlo Transformer, an original approach that naturally captures the observations distribution in a transformer architecture. The keys, queries, values and attention vectors of the network are considered as the unobserved stochastic states of its hidden structure. This generative model is such that at each time step the received observation is a random function of its past states in a given attention window. In this general state-space setting, we use Sequential Monte Carlo methods to approximate the posterior distributions of the states given the observations, and to estimate the gradient of the log-likelihood. We hence propose a generative model giving a predictive distribution, instead of a single-point estimate.
  • Topology optimization of connections in mechanical systems
    • Rakotondrainibe Lalaina
    , 2020. Topology optimization is commonly used for mechanical parts. It usually involves a single part, and connections to other parts are assumed to be fixed. This thesis proposes another approach to topology optimization in which the connections, as well as the structure, are design variables. We focus on a standard long bolt in a prestressed state. This connection model is idealized so as to be sufficiently representative while remaining computationally cheap. The idealized model is complemented with mechanical constraints specific to the bolt. The problem is to optimize concurrently the topology and geometry of a structure, on the one hand, and the locations and number of bolts, on the other hand. The elastic structure is represented by a level-set function and is optimized with Hadamard's boundary variation method. The locations are optimized using a parametric gradient-based algorithm. The concept of topological derivative is adapted to add a small idealized bolt at the best location with the optimal orientation, and thus to optimize the number of bolts. This coupled topology optimization (shape and connections) is illustrated with 2d and 3d academic test cases. It is then applied to a simplified industrial test case. The coupling provides more satisfactory performance of a part than shape optimization with fixed connections. The approach presented in this work is therefore one step closer to the optimization of assembled systems.
  • Coupling structural optimization and trajectory optimization methods in additive manufacturing
    • Boissier Mathilde
    , 2020. This work investigates path planning optimization for powder bed fusion additive manufacturing processes and relates it to the design of the built part. The state of the art mainly studies trajectories based on existing patterns and, besides their mechanical evaluation, their relevance has not been related to the object's shape. We propose in this work a systematic approach to optimize the path without any a priori restriction. The typical optimization problem is to melt the desired structure, without over-heating (to avoid thermally induced residual stresses) and possibly with a minimal path length. The state equation is the heat equation with a source term depending on the scanning path. Two physical 2-d models involving a temperature constraint are proposed: a transient one and a steady-state one (in which the time dependence is removed). Based on shape optimization for the steady-state model and control for the transient model, path optimization algorithms are developed. Numerical results are then presented, allowing a critical assessment of the choices we made. To increase the path design freedom, we modify the steady-state algorithm to introduce path splits. Two methods are compared. In the first one, the source power is added to the optimization variables and an algorithm mixing relaxation-penalization techniques with control of the total variation is set up. In the second one, the notion of topological derivative is applied to the path to judiciously remove and add pieces. Eventually, in the steady-state setting, we conduct a concurrent optimization of the part's shape and of the scanning path. This multiphysics optimization problem opens up perspectives ranging from direct applications to future generalizations.
  • Classical Rayleigh-Jeans condensation of light waves: Observation and thermodynamic characterization
    • Baudin Kilian
    • Fusaro Adrian
    • Krupa Katarzyna
    • Garnier Josselin
    • Rica Sergio
    • Millot Guy
    • Picozzi Antonio
    Physical Review Letters, American Physical Society, 2020, 125, pp.244101. Theoretical studies on wave turbulence predict that a purely classical system of random waves can exhibit a process of condensation, which originates in the singularity of the Rayleigh-Jeans equilibrium distribution. We report the experimental observation of the transition to condensation of classical optical waves propagating in a multimode fiber, i.e., in a conservative Hamiltonian system without thermal heat bath. In contrast to conventional self-organization processes featured by the nonequilibrium formation of nonlinear coherent structures (solitons, vortices,…), here the self-organization originates in the equilibrium Rayleigh-Jeans statistics of classical waves. The experimental results show that the chemical potential reaches the lowest energy level at the transition to condensation, which leads to the macroscopic population of the fundamental mode of the optical fiber. The near-field and far-field measurements of the condensate fraction across the transition to condensation are in quantitative agreement with the Rayleigh-Jeans theory. The thermodynamics of classical wave condensation reveals that the heat capacity takes a constant value in the condensed state and tends to vanish above the transition in the normal state. Our experiments provide the first demonstration of a coherent phenomenon of self-organization that is exclusively driven by optical thermalization toward the Rayleigh-Jeans equilibrium. (10.1103/PhysRevLett.125.244101)
    DOI : 10.1103/PhysRevLett.125.244101
  • Le cadran de la visibilité de la sphere médiatique et politique sur YouTube
    • Benbouzid Bilel
    • Gauthier Emma
    • Ramaciotti Pedro
    • Roudier Bertrand
    • Venturini Tommaso
    , 2020.
  • Toward a new fully algebraic preconditioner for symmetric positive definite problems
    • Spillane Nicole
    , 2020. A new domain decomposition preconditioner is introduced for efficiently solving linear systems Ax = b with a symmetric positive definite matrix A. The particularity of the new preconditioner is that it is not necessary to have access to the so-called Neumann matrices (i.e.: the matrices that result from assembling the variational problem underlying A restricted to each subdomain). All the components in the preconditioner can be computed with the knowledge only of A (and this is the meaning given here to the word algebraic). The new preconditioner relies on the GenEO coarse space for a matrix that is a low-rank modification of A and on the Woodbury matrix identity. The idea underlying the new preconditioner is introduced here for the first time with a first version of the preconditioner. Some numerical illustrations are presented. A more extensive presentation including some improved variants of the new preconditioner can be found in [7] (https://hal.archives-ouvertes.fr/hal-03258644).
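
The Woodbury matrix identity used by the preconditioner can be checked numerically in a few lines. The sketch below (an illustration only, unrelated to the GenEO construction) verifies $(A + UCU^T)^{-1} = A^{-1} - A^{-1}U(C^{-1} + U^T A^{-1} U)^{-1} U^T A^{-1}$ on a random symmetric positive definite matrix with a rank-3 modification:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 3

# Symmetric positive definite A and a low-rank modification A + U C U^T
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
U = rng.standard_normal((n, k))
C = np.eye(k)

Ainv = np.linalg.inv(A)
woodbury = Ainv - Ainv @ U @ np.linalg.inv(np.linalg.inv(C) + U.T @ Ainv @ U) @ U.T @ Ainv
direct = np.linalg.inv(A + U @ C @ U.T)
print(np.allclose(woodbury, direct))  # True
```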
  • Debiasing Stochastic Gradient Descent to handle missing values
    • Sportisse Aude
    • Boyer Claire
    • Dieuleveut Aymeric
    • Josse Julie
    , 2020. The stochastic gradient algorithm is a key ingredient of many machine learning methods and is particularly appropriate for large-scale learning. However, a major caveat of large data is their incompleteness. We propose an averaged stochastic gradient algorithm handling missing values in linear models. This approach has the merit of being free from the need for any data distribution modeling and of accounting for heterogeneous missing proportions. In both streaming and finite-sample settings, we prove that this algorithm achieves a convergence rate of $\mathcal{O}(\frac{1}{n})$ at iteration $n$, the same as without missing values. We show the convergence behavior and the relevance of the algorithm not only on synthetic data but also on real data sets, including those collected from medical registers.
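
A rough sketch of the idea (a generic inverse-probability debiasing of the zero-imputed least-squares gradient under MCAR covariates with known observation probabilities, combined with Polyak-Ruppert averaging; this is an illustration under simplifying assumptions, not the authors' exact estimator or step-size tuning):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 50_000, 5
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
p = np.full(d, 0.7)                 # probability that each covariate is observed (assumed known)

X = rng.standard_normal((n, d))
y = X @ beta_true + 0.1 * rng.standard_normal(n)
mask = rng.uniform(size=(n, d)) < p
X_tilde = np.where(mask, X, 0.0)    # zero-imputed covariates

beta, beta_avg, step = np.zeros(d), np.zeros(d), 0.01
for i in range(n):
    x, yi = X_tilde[i], y[i]
    # Unbiased estimate of the complete-data gradient X_i (X_i^T beta - y_i):
    # inverse-probability rescaling of the zero-imputed features plus a diagonal correction.
    g = (x / p) * (x @ (beta / p)) - (1.0 - p) * x ** 2 * beta / p ** 2 - (x / p) * yi
    beta -= step * g
    beta_avg += (beta - beta_avg) / (i + 1)   # Polyak-Ruppert averaging

print(np.round(beta_avg, 2))  # approximately beta_true
```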
  • Hard Shape-Constrained Kernel Machines
    • Aubin-Frankowski Pierre-Cyril
    • Szabó Zoltán
    , 2020. Shape constraints (such as non-negativity, monotonicity, convexity) play a central role in a large number of applications, as they usually improve performance for small sample size and help interpretability. However enforcing these shape requirements in a hard fashion is an extremely challenging problem. Classically, this task is tackled (i) in a soft way (without out-of-sample guarantees), (ii) by specialized transformation of the variables on a case-by-case basis, or (iii) by using highly restricted function classes, such as polynomials or polynomial splines. In this paper, we prove that hard affine shape constraints on function derivatives can be encoded in kernel machines which represent one of the most flexible and powerful tools in machine learning and statistics. Particularly, we present a tightened second-order cone constrained reformulation, that can be readily implemented in convex solvers. We prove performance guarantees on the solution, and demonstrate the efficiency of the approach in joint quantile regression with applications to economics and to the analysis of aircraft trajectories, among others.
  • A Stochastic Path-Integrated Differential EstimatoR Expectation Maximization Algorithm
    • Fort Gersende
    • Moulines Eric
    • Wai Hoi-To
    , 2020. The Expectation Maximization (EM) algorithm is of key importance for inference in latent variable models, including mixtures of regressors and experts and models with missing observations. This paper introduces a novel EM algorithm, called SPIDER-EM, for inference from a training set of size $n$, $n \gg 1$. At the core of our algorithm is an estimator of the full conditional expectation in the E-step, adapted from the stochastic path-integrated differential estimator (SPIDER) technique. We derive finite-time complexity bounds for smooth non-convex likelihoods: we show that for convergence to an $\epsilon$-approximate stationary point, the complexity scales as $K_{\mathrm{Opt}}(n, \epsilon) = O(\epsilon^{-1})$ and $K_{\mathrm{CE}}(n, \epsilon) = n + \sqrt{n}\,O(\epsilon^{-1})$, where $K_{\mathrm{Opt}}(n, \epsilon)$ and $K_{\mathrm{CE}}(n, \epsilon)$ are respectively the number of M-steps and the number of per-sample conditional expectation evaluations. This improves over the state-of-the-art algorithms. Numerical results support our findings.
  • NeuMiss networks: differentiable programming for supervised learning with missing values
    • Le Morvan Marine
    • Josse Julie
    • Moreau Thomas
    • Scornet Erwan
    • Varoquaux Gaël
    , 2020. The presence of missing values makes supervised learning much more challenging. Indeed, previous work has shown that even when the response is a linear function of the complete data, the optimal predictor is a complex function of the observed entries and the missingness indicator. As a result, the computational or sample complexities of consistent approaches depend on the number of missing patterns, which can be exponential in the number of dimensions. In this work, we derive the analytical form of the optimal predictor under a linearity assumption and various missing data mechanisms including Missing at Random (MAR) and self-masking (Missing Not At Random). Based on a Neumann-series approximation of the optimal predictor, we propose a new principled architecture, named NeuMiss networks. Their originality and strength come from the use of a new type of non-linearity: the multiplication by the missingness indicator. We provide an upper bound on the Bayes risk of NeuMiss networks, and show that they have good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing data patterns. As a result they scale well to problems with many features, and remain statistically efficient for medium-sized samples. Moreover, we show that, contrary to procedures using EM or imputation, they are robust to the missing data mechanism, including difficult MNAR settings such as self-masking.
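
A schematic NumPy rendering of the key architectural ingredient described above: a truncated Neumann-series iteration whose only nonlinearity is multiplication by the observation mask. The weights, depth and linear read-out below are placeholders (in the network they are learned), so this is a sketch of the mechanism rather than the published architecture:

```python
import numpy as np

def neumiss_block(x, obs_mask, W, depth):
    """Neumann-style block: the only nonlinearity is multiplication by the
    observation mask (1 = observed, 0 = missing)."""
    x0 = np.where(obs_mask > 0, x, 0.0)       # zero-impute the missing entries
    h = x0
    for _ in range(depth):
        h = obs_mask * (W @ h) + x0           # truncated Neumann-series iteration
    return h

rng = np.random.default_rng(3)
d = 4
W = 0.1 * rng.standard_normal((d, d))         # placeholder weights (learned in practice)
beta = rng.standard_normal(d)                 # placeholder linear read-out
x = rng.standard_normal(d)
obs_mask = np.array([1.0, 0.0, 1.0, 1.0])     # the second feature is missing
print(beta @ neumiss_block(x, obs_mask, W, depth=5))
```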
  • CO-Optimal Transport
    • Redko Ievgen
    • Vayer Titouan
    • Flamary Rémi
    • Courty Nicolas
    , 2020. Optimal transport (OT) is a powerful geometric and probabilistic tool for finding correspondences and measuring similarity between two distributions. Yet, its original formulation relies on the existence of a cost function between the samples of the two distributions, which makes it impractical for comparing data distributions supported on different topological spaces. To circumvent this limitation, we propose a novel OT problem, named COOT for CO-Optimal Transport, that aims to simultaneously optimize two transport maps between both samples and features. This is different from other approaches that either discard the individual features by focussing on pairwise distances (e.g. Gromov-Wasserstein) or need to model explicitly the relations between the features. COOT leads to interpretable correspondences between both samples and feature representations and holds metric properties. We provide a thorough theoretical analysis of our framework and establish rich connections with the Gromov-Wasserstein distance. We demonstrate its versatility with two machine learning applications in heterogeneous domain adaptation and co-clustering/data summarization, where COOT leads to performance improvements over the competing state-of-the-art methods.
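
The alternating optimization over a sample coupling and a feature coupling can be sketched as a simple block-coordinate descent (uniform weights, exact transport plans computed with the POT library, assumed installed); this is an illustration of the scheme described above, not the authors' implementation, and such a descent may stop at a local optimum:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def coot_bcd(X, Y, n_iter=10):
    """Alternate between the sample coupling pi_s and the feature coupling pi_v."""
    (n, d), (m, e) = X.shape, Y.shape
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)      # sample weights
    u, v = np.full(d, 1 / d), np.full(e, 1 / e)      # feature weights
    pi_v = np.outer(u, v)                            # initial feature coupling
    X2, Y2 = X ** 2, Y ** 2
    for _ in range(n_iter):
        # Cost between samples induced by the current feature coupling
        M_s = (X2 @ pi_v.sum(1))[:, None] + (Y2 @ pi_v.sum(0))[None, :] - 2 * X @ pi_v @ Y.T
        pi_s = ot.emd(a, b, M_s - M_s.min())         # shifting the cost keeps the same plan
        # Cost between features induced by the current sample coupling
        M_v = (X2.T @ pi_s.sum(1))[:, None] + (Y2.T @ pi_s.sum(0))[None, :] - 2 * X.T @ pi_s @ Y
        pi_v = ot.emd(u, v, M_v - M_v.min())
    return pi_s, pi_v

rng = np.random.default_rng(4)
X = rng.standard_normal((30, 5))
Y = X[:, ::-1][rng.permutation(30)]   # same data with permuted samples and reversed features
pi_s, pi_v = coot_bcd(X, Y)
print(pi_v.round(2))                  # ideally concentrates on the feature reversal
```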
  • Topology Optimization for Orthotropic Supports in Additive Manufacturing
    • Godoy Matías
    , 2023.
  • Latent group structure and regularized regression
    • Perrakis Konstantinos
    • Lartigue Thomas
    • Dondelinger Frank
    • Mukherjee Sach
    , 2020. Regression models generally assume that the conditional distribution of response Y given features X is the same for all samples. For heterogeneous data with distributional differences among latent groups, standard regression models are ill-equipped, especially in large multivariate problems where hidden heterogeneity can easily pass undetected. To allow for robust and interpretable regression modeling in this setting we propose a class of regularized mixture models that couples together both the multivariate distribution of X and the conditional Y | X. This joint modeling approach offers a novel way to deal with suspected distributional shifts, which allows for automatic control of confounding by latent group structure and delivers scalable, sparse solutions. Estimation is handled via an expectation-maximization algorithm, whose convergence is established theoretically. We illustrate the key ideas via empirical examples.
  • Pseudo-Darwinian evolution of physical flows in complex networks
    • Berthelot Geoffroy C.B.
    • Tupikina Liubov
    • Kang Min-Yeong
    • Sapoval Bernard
    • Grebenkov Denis S
    Scientific Reports, Nature Publishing Group, 2020, 10 (1). The evolution of complex transport networks is investigated under three strategies of link removal: random, intentional attack and “Pseudo-Darwinian” strategy. At each evolution step and regarding the selected strategy, one removes either a randomly chosen link, or the link carrying the strongest flux, or the link with the weakest flux, respectively. We study how the network structure and the total flux between randomly chosen source and drain nodes evolve. We discover a universal power-law decrease of the total flux, followed by an abrupt transport collapse. The time of collapse is shown to be determined by the average number of links per node in the initial network, highlighting the importance of this network property for ensuring safe and robust transport against random failures, intentional attacks and maintenance cost optimizations. (10.1038/s41598-020-72379-8)
    DOI : 10.1038/s41598-020-72379-8
  • Benchmarking large-scale continuous optimizers: the bbob-largescale testbed, a COCO software guide and beyond
    • Varelas Konstantinos
    • Ait El Hara Ouassim
    • Brockhoff Dimo
    • Hansen Nikolaus
    • Nguyen Duc Manh
    • Tušar Tea
    • Auger Anne
    Applied Soft Computing, Elsevier, 2020, 97 (A), pp.106737. Benchmarking of optimization solvers is an important and compulsory task for performance assessment that in turn can help in improving the design of algorithms. It is a repetitive and tedious task. Yet, this task has been greatly automatized in the past ten years with the development of the Comparing Continuous Optimizers platform (COCO). In this context, this paper presents a new testbed, called bbob-largescale, that contains functions ranging from dimension 20 to 640, compatible with and extending the well-known single-objective noiseless bbob test suite to larger dimensions. The test suite contains 24 single-objective functions in continuous domain, built to model well-known difficulties in continuous optimization and to test the scaling behavior of algorithms. To reduce the computational demand of the orthogonal search space transformations that appear in the bbob test suite, while retaining some desired properties, we use permuted block diagonal orthogonal matrices. The paper discusses implementation technicalities and presents a guide for using the test suite within the COCO platform and for interpreting the postprocessed output. The source code of the new test suite is available on GitHub as part of the open source COCO benchmarking platform. (10.1016/j.asoc.2020.106737)
    DOI : 10.1016/j.asoc.2020.106737
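
The permuted block-diagonal orthogonal transformations mentioned in the abstract can be sketched as follows (an illustrative construction; the block size, permutations and other parameters are not necessarily those used in bbob-largescale):

```python
import numpy as np

def permuted_block_orthogonal(dim, block_size, rng):
    """A permuted block-diagonal orthogonal matrix: cheap to store and to apply
    block-wise, while remaining exactly orthogonal."""
    assert dim % block_size == 0
    B = np.zeros((dim, dim))
    for s in range(0, dim, block_size):
        q, _ = np.linalg.qr(rng.standard_normal((block_size, block_size)))
        B[s:s + block_size, s:s + block_size] = q      # orthogonal block via QR
    P_rows = np.eye(dim)[rng.permutation(dim)]          # random row permutation
    P_cols = np.eye(dim)[rng.permutation(dim)]          # random column permutation
    return P_rows @ B @ P_cols

rng = np.random.default_rng(5)
R = permuted_block_orthogonal(dim=40, block_size=4, rng=rng)
print(np.allclose(R @ R.T, np.eye(40)))  # True: the product is still orthogonal
```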
  • Asymptotics of the eigenvalues for exponentially parameterized pentadiagonal matrices
    • Tavakolipour Hanieh
    • Shakeri Fatemeh
    Numerical Linear Algebra with Applications, Wiley, 2020, 27 (6). Let P(t) be an n × n (complex) exponentially parameterized pentadiagonal matrix. In this article, using a theorem of Akian, Bapat, and Gaubert, we present explicit formulas for the asymptotics of the moduli of the eigenvalues of P(t) as t → ∞. Our approach is based on exploiting the relation with tropical algebra and the weighted digraphs of matrices. We prove that this asymptotics tends to a unique limit or two limits. Also, for the n − 2 largest-magnitude eigenvalues of P(t), we compute the asymptotics as n → ∞, in addition to t. When P(t) is also symmetric, these formulas allow us to compute the asymptotics of the 2-norm condition number. The number of arithmetic operations involved does not depend on n. We illustrate our results by some numerical tests. (10.1002/nla.2330)
    DOI : 10.1002/nla.2330
  • A Universal Approximation Result for Difference of log-sum-exp Neural Networks
    • Calafiore Giuseppe
    • Gaubert Stéphane
    • Possieri Corrado
    IEEE Transactions on Neural Networks and Learning Systems, IEEE, 2020, 31 (12), pp.5603-5612. We show that a neural network whose output is obtained as the difference of the outputs of two feedforward networks with exponential activation function in the hidden layer and logarithmic activation function in the output node (LSE networks) is a smooth universal approximator of continuous functions over convex, compact sets. By using a logarithmic transform, this class of networks maps to a family of subtraction-free ratios of generalized posynomials, which we also show to be universal approximators of positive functions over log-convex, compact subsets of the positive orthant. The main advantage of Difference-LSE networks with respect to classical feedforward neural networks is that, after a standard training phase, they provide surrogate models for design that possess a specific difference-of-convex-functions form, which makes them optimizable via relatively efficient numerical methods. In particular, by adapting an existing difference-of-convex algorithm to these models, we obtain an algorithm for performing effective optimization-based design. We illustrate the proposed approach by applying it to data-driven design of a diet for a patient with type-2 diabetes. (10.1109/TNNLS.2020.2975051)
    DOI : 10.1109/TNNLS.2020.2975051
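
A minimal numerical sketch of a difference-of-log-sum-exp function of affine terms, the function class discussed above (untrained toy parameters; in the paper the parameters are fitted to data and the resulting difference-of-convex surrogate is then optimized with a DC algorithm):

```python
import numpy as np
from scipy.special import logsumexp

def dlse(x, A1, b1, A2, b2):
    """f(x) = LSE(A1 x + b1) - LSE(A2 x + b2): a difference of two convex functions."""
    return logsumexp(A1 @ x + b1) - logsumexp(A2 @ x + b2)

rng = np.random.default_rng(6)
d, k = 2, 8                                  # input dimension, affine pieces per LSE term
A1, b1 = rng.standard_normal((k, d)), rng.standard_normal(k)
A2, b2 = rng.standard_normal((k, d)), rng.standard_normal(k)
print(dlse(np.array([0.3, -1.2]), A1, b1, A2, b2))
```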
  • Irreducibility and geometric ergodicity of Hamiltonian Monte Carlo
    • Durmus Alain
    • Moulines Éric
    • Saksman Eero
    Annals of Statistics, Institute of Mathematical Statistics, 2020, 48 (6). (10.1214/19-AOS1941)
    DOI : 10.1214/19-AOS1941
  • Weak approximations and VIX option prices expansions in rough forward variances models
    • Bourgey Florian
    • Gobet Emmanuel
    • de Marco Stefano
    , 2020.