WARIN Xavier

Affiliations
  • 2018 - 2021
    Electricité de France
  • 2016 - 2021
    EDF R&D
  • 2018 - 2021
    Centre de recherche en économie et statistique de l'ENSAE et l'ENSAI
  • Neural networks-based algorithms for stochastic control and PDEs in finance.

    Maximilien GERMAIN, Huyen PHAM, Xavier WARIN
    2021
    This paper presents machine learning techniques and deep reinforcement learning-based algorithms for the efficient resolution of nonlinear partial differential equations and dynamic optimization problems arising in investment decisions and derivative pricing in financial engineering. We survey recent results in the literature, present new developments, notably in the fully nonlinear case, and compare the different schemes, illustrated by numerical tests on various financial applications. We conclude by highlighting some future research directions.
  • Discretization and machine learning approximation of BSDEs with a constraint on the Gains-process.

    Idris KHARROUBI, Thomas LIM, Xavier WARIN
    Monte Carlo Methods and Applications | 2021
    We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh grid converges to zero. We then focus on the approximation of the discretely constrained BSDE. For that we adopt a machine learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks under constraints on the neural network and its derivative. We then derive an algorithm converging to the discretely constrained BSDE as the number of neurons goes to infinity. We end by numerical experiments. Mathematics Subject Classification (2010): 65C30, 65M75, 60H35, 93E20, 49L25.
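    For reference, the facelift operator used above admits a standard closed form in the constrained-BSDE literature; assuming the gains process is constrained to a closed convex set K with support function \delta_K, it reads (notation assumed for illustration, not copied from the paper):

        F_K[\varphi](x) \;=\; \sup_{y \in \mathbb{R}^d} \big( \varphi(x+y) - \delta_K(y) \big),
        \qquad \delta_K(y) \;=\; \sup_{z \in K} y \cdot z .

    Applying F_K to the intermediate value function at each date of the time grid is the discretization step described in the abstract.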
  • DeepSets and their derivative networks for solving symmetric PDEs.

    Maximilien GERMAIN, Mathieu LAURIERE, Huyen PHAM, Xavier WARIN
    2021
    Machine learning methods for solving nonlinear partial differential equations (PDEs) are a hot topic, and different algorithms proposed in the literature show efficient numerical approximation in high dimension. In this paper, we introduce a class of PDEs that are invariant to permutations, called symmetric PDEs. Such problems are widespread, ranging from cosmology to quantum mechanics, and option pricing/hedging in multi-asset markets with exchangeable payoffs. Our main application actually comes from the particle approximation of mean-field control problems. We design deep learning algorithms based on certain types of neural networks, named PointNet and DeepSet (and their associated derivative networks), for simultaneously computing an approximation of the solution of symmetric PDEs and its gradient. We illustrate the performance and accuracy of the PointNet/DeepSet networks compared to classical feedforward ones, and provide several numerical results of our algorithm for the examples of mean-field systemic risk, a mean-variance problem and a min/max linear quadratic McKean-Vlasov control problem.
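    To make the permutation-invariance idea concrete, here is a minimal Python sketch of a DeepSet-style forward pass (illustrative layer sizes and names; not the authors' code): an encoder phi is applied to each particle, the results are pooled by a symmetric mean, and a decoder rho maps the pooled feature to the output, so shuffling the particles cannot change the result.

        import numpy as np

        def deepset_forward(points, W_phi, b_phi, W_rho, b_rho):
            # rho(mean_i phi(x_i)): per-point encoder, symmetric pooling, decoder
            h = np.tanh(points @ W_phi + b_phi)      # phi applied row by row
            pooled = h.mean(axis=0)                  # permutation-invariant pooling
            return np.tanh(pooled @ W_rho + b_rho)   # rho on the pooled feature

        rng = np.random.default_rng(0)
        n, d, latent = 5, 2, 16
        W_phi, b_phi = rng.normal(size=(d, latent)), np.zeros(latent)
        W_rho, b_rho = rng.normal(size=(latent, 1)), np.zeros(1)
        x = rng.normal(size=(n, d))                  # n exchangeable particles in R^d
        assert np.allclose(deepset_forward(x, W_phi, b_phi, W_rho, b_rho),
                           deepset_forward(x[::-1], W_phi, b_phi, W_rho, b_rho))

    Roughly speaking, the derivative networks mentioned in the abstract are obtained by differentiating such an architecture with respect to one input point, which preserves the symmetry in the remaining points.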
  • Rate of convergence for particles approximation of PDEs in Wasserstein space.

    Maximilien GERMAIN, Huyen PHAM, Xavier WARIN
    2021
    We prove a rate of convergence of order 1/N for the N-particle approximation of a second-order partial differential equation in the space of probability measures, such as the Master equation or the Bellman equation of a mean-field control problem under common noise. The proof relies on backward stochastic differential equation techniques.
  • Fast multivariate empirical cumulative distribution function with connection to kernel density estimation.

    Nicolas LANGRENE, Xavier WARIN
    2020
    This paper revisits the problem of computing empirical cumulative distribution functions (ECDF) efficiently on large, multivariate datasets. Computing an ECDF at one evaluation point requires O(N) operations on a dataset composed of N data points. Therefore, a direct evaluation of ECDFs at N evaluation points requires quadratic O(N^2) operations, which is prohibitive for large-scale problems. Two fast and exact methods are proposed and compared. The first one is based on fast summation in lexicographical order, with O(N log N) complexity, and requires the evaluation points to lie on a regular grid. The second one is based on the divide-and-conquer principle, with O(N log(N)^{max(d-1,1)}) complexity, and requires the evaluation points to coincide with the input points. The two fast algorithms are described and detailed in the general d-dimensional case, and numerical experiments validate their speed and accuracy. In addition, the paper establishes a direct connection between cumulative distribution functions and kernel density estimation (KDE) for a large class of kernels. This connection paves the way for fast exact algorithms for multivariate kernel density estimation and kernel regression. Numerical tests with the Laplacian kernel validate the speed and accuracy of the proposed algorithms. A broad range of large-scale multivariate density estimation, cumulative distribution estimation, survival function estimation and regression problems can benefit from the proposed numerical methods.
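    The multivariate algorithms do not fit in a short snippet, but the following Python sketch (hypothetical helper names) contrasts the direct quadratic evaluation with the sorting-based idea in one dimension, where a sorted sample reduces each ECDF query to a binary search:

        import numpy as np

        def ecdf_naive(data, queries):
            # direct O(N*M) evaluation: F(q) = #{x_i <= q} / N
            return np.array([(data <= q).mean() for q in queries])

        def ecdf_fast_1d(data, queries):
            # sort once, then answer each query by binary search: O((N+M) log N)
            xs = np.sort(data)
            return np.searchsorted(xs, queries, side="right") / len(xs)

        rng = np.random.default_rng(1)
        x, q = rng.normal(size=1000), rng.normal(size=50)
        assert np.allclose(ecdf_naive(x, q), ecdf_fast_1d(x, q))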
  • Deep backward schemes for high-dimensional nonlinear PDEs.

    Come HURE, Huyen PHAM, Xavier WARIN
    Mathematics of Computation | 2020
    We propose new machine learning schemes for solving high dimensional nonlinear partial differential equations (PDEs). Relying on the classical backward stochastic differential equation (BSDE) representation of PDEs, our algorithms estimate simultaneously the solution and its gradient by deep neural networks. These approximations are performed at each time step from the minimization of loss functions defined recursively by backward induction. The methodology is extended to variational inequalities arising in optimal stopping problems. We analyze the convergence of the deep learning schemes and provide error estimates in terms of the universal approximation of neural networks. Numerical results show that our algorithms give very good results up to dimension 50 (and certainly above), for both PDE and variational inequality problems. For the resolution of PDEs, our results are very similar to those obtained by the recent method in \cite{weinan2017deep} when the latter converges to the right solution or does not diverge. Numerical tests indicate that the proposed methods are not stuck in poor local minima, as can be the case with the algorithm designed in \cite{weinan2017deep}, and no divergence is experienced. The only limitation seems to be the inability of the considered deep neural networks to represent a solution with too complex a structure in high dimension.
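    As an illustration of one backward induction step, here is a hedged PyTorch sketch (driver, dimensions and data are placeholders, not the paper's setup): at time step t_i, networks U_i and Z_i for the solution and its gradient are fitted by regressing the already-computed values Y_{i+1} on the one-step BSDE dynamics.

        import torch
        import torch.nn as nn

        d, dt = 10, 0.01
        U = nn.Sequential(nn.Linear(d, 32), nn.Tanh(), nn.Linear(32, 1))   # solution net
        Z = nn.Sequential(nn.Linear(d, 32), nn.Tanh(), nn.Linear(32, d))   # gradient net
        opt = torch.optim.Adam(list(U.parameters()) + list(Z.parameters()), lr=1e-3)

        def f(t, x, y, z):
            return -y            # illustrative driver, not the paper's example

        def train_step(x_i, dw_i, y_next, t_i):
            # minimize E|Y_{i+1} - (U_i(X_i) - f dt + Z_i(X_i) . dW_i)|^2
            u, z = U(x_i), Z(x_i)
            pred = u - f(t_i, x_i, u, z) * dt + (z * dw_i).sum(dim=1, keepdim=True)
            loss = ((y_next - pred) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
            return loss.item()

        x = torch.randn(256, d)                      # simulated forward states X_i
        dw = torch.randn(256, d) * dt ** 0.5         # Brownian increments
        y_next = torch.randn(256, 1)                 # placeholder step-(i+1) values
        train_step(x, dw, y_next, t_i=0.5)

    In the actual scheme this minimization is repeated for i = N-1, ..., 0, initializing Y_N with the terminal condition.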
  • Deep backward multistep schemes for nonlinear PDEs and approximation error analysis.

    Maximilien GERMAIN, Huyen PHAM, Xavier WARIN
    2020
    We develop multistep machine learning schemes for solving nonlinear partial differential equations (PDEs) in high dimension. The method is based on the probabilistic representation of PDEs by backward stochastic differential equations (BSDEs) and its iterated time discretization. In the case of semilinear PDEs, our algorithm estimates simultaneously by backward induction the solution and its gradient by neural networks, through sequential minimizations of suitable quadratic loss functions that are performed by stochastic gradient descent. The approach is extended to the more challenging case of fully nonlinear PDEs, and we propose different approximations of the Hessian of the solution to the PDE, i.e., the $\Gamma$-component of the BSDE, by combining Malliavin weights and neural networks. Extensive numerical tests are carried out with various examples of semilinear PDEs, including the viscous Burgers equation, and examples of fully nonlinear PDEs such as Hamilton-Jacobi-Bellman equations arising in portfolio selection problems with stochastic volatilities, or Monge-Ampère equations in dimension up to 15. The performance and accuracy of our numerical results are compared with some other recent machine learning algorithms in the literature, see \cite{HJE17}, \cite{HPW19}, \cite{BEJ19}, \cite{BBCJN19} and \cite{phawar19}. Furthermore, we provide a rigorous approximation error analysis of the deep backward multistep scheme as well as the deep splitting method for semilinear PDEs, which yields a convergence rate in terms of the number of neurons for shallow neural networks.
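    Schematically, in the semilinear case, the step-i multistep loss has the following form (an illustrative rendering of the idea, not the paper's exact notation):

        \mathcal{L}_i(U_i, Z_i) \;=\; \mathbb{E}\Big|\, g(X_N) - U_i(X_i)
            \;+\; \sum_{j=i}^{N-1} f\big(t_j, X_j, \widehat U_j(X_j), \widehat Z_j(X_j)\big)\,\Delta t
            \;-\; \sum_{j=i}^{N-1} \widehat Z_j(X_j)\cdot \Delta W_j \,\Big|^2 ,

    where (\widehat U_i, \widehat Z_i) := (U_i, Z_i) at the current step and the hatted networks at steps j > i have already been trained; regressing against the terminal condition plus all later increments, rather than against the step-(i+1) value only, is what limits error propagation across time steps.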
  • Option valuation and hedging using an asymmetric risk function: asymptotic optimality through fully nonlinear partial differential equations.

    Emmanuel GOBET, Isaque PIMENTEL, Xavier WARIN
    Finance and Stochastics | 2020
    Discrete time hedging produces a residual risk, namely, the tracking error. The major problem is to get valuation/hedging policies minimizing this error. We evaluate the risk between trading dates through a function penalizing profits and losses asymmetrically. After deriving the asymptotics within a discrete time risk measurement for a large number of trading dates, we derive the optimal strategies minimizing the asymptotic risk in the continuous time setting. We characterize the optimality through a class of fully nonlinear Partial Differential Equations (PDE). Numerical experiments show that the optimal strategies associated with the discrete and asymptotic approaches coincide asymptotically.
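    For concreteness, one simple example of an asymmetric penalty of this type (an illustration only; not necessarily the exact function used in the paper) is

        \ell_\gamma(y) \;=\; \big(1 + \gamma\,\mathbf{1}_{\{y < 0\}}\big)\, y^2 , \qquad \gamma > 0 ,

    which charges losses (y < 0) more heavily than profits; the asymptotic analysis is then carried out as the number of trading dates goes to infinity.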
  • A power plant valuation under an asymmetric risk criterion taking into account maintenance costs.

    Clemence ALASSEUR, Emmanuel GOBET, Isaque PIMENTEL, Xavier WARIN
    2019
    Power producers are interested in valuing their power plant production. We propose to reduce the variability of the associated income by trading forward contracts, taking the fixed costs into account and using an asymmetric risk criterion. In an asymptotic framework, we provide an optimal hedging strategy through the solution of a nonlinear partial differential equation. As a numerical experiment, we analyze the impact of the fixed-cost structure on the hedging policy and the value of the assets.
  • Regression Monte Carlo for microgrid management.

    Clemence ALASSEUR, Alessandro BALATA, Sahar BEN AZIZA, Aditya MAHESHWARI, Peter TANKOV, Xavier WARIN
    ESAIM: Proceedings and Surveys | 2019
    No summary available.
  • Numerical resolution of McKean-Vlasov FBSDEs using neural networks.

    Maximilien GERMAIN, Joseph MIKAEL, Xavier WARIN
    2019
    We propose several algorithms to solve McKean-Vlasov Forward Backward Stochastic Differential Equations (FBSDEs). Our schemes rely on the approximating power of neural networks to estimate the solution or its gradient through minimization problems. As a consequence, we obtain methods able to tackle both mean-field games and mean-field control problems in moderate dimension. We analyze the numerical behavior of our algorithms on several examples, including nonlinear quadratic models.
  • Numerical approximation of general Lipschitz BSDEs with branching processes.

    Bruno BOUCHARD, Xiaolu TAN, Xavier WARIN
    ESAIM: Proceedings and Surveys | 2019
    We extend the branching process based numerical algorithm of Bouchard et al. [3], which is dedicated to semilinear PDEs (or BSDEs) with Lipschitz nonlinearity, to the case where the nonlinearity involves the gradient of the solution. As in [3], this requires a localization procedure that uses a priori estimates on the true solution, so as to ensure the well-posedness of the Picard iteration scheme involved, and the global convergence of the algorithm. When the nonlinearity depends on the gradient, the latter needs to be controlled as well. This is done by using a face-lifting procedure. Convergence of our algorithm is proved without any limitation on the time horizon. We also provide numerical simulations to illustrate the performance of the algorithm.
  • Neural networks-based backward scheme for fully nonlinear PDEs.

    Huyen PHAM, Xavier WARIN
    2019
    We propose a numerical method for solving high dimensional fully nonlinear partial differential equations (PDEs). Our algorithm estimates simultaneously by backward time induction the solution and its gradient by multi-layer neural networks, through a sequence of learning problems obtained from the minimization of suitable quadratic loss functions and training simulations. This methodology extends to the fully non-linear case the approach recently proposed in [HPW19] for semi-linear PDEs. Numerical tests illustrate the performance and accuracy of our method on several examples in high dimension with nonlinearity on the Hessian term including a linear quadratic control problem with control on the diffusion coefficient.
  • Fast and Stable Multivariate Kernel Density Estimation by Fast Sum Updating.

    Nicolas LANGRENE, Xavier WARIN
    Journal of Computational and Graphical Statistics | 2019
    Kernel density estimation and kernel regression are powerful but computationally expensive techniques: a direct evaluation of kernel density estimates at M evaluation points given N input sample points requires quadratic O(MN) operations, which is prohibitive for large-scale problems. For this reason, approximate methods such as binning with the Fast Fourier Transform or the Fast Gauss Transform have been proposed to speed up kernel density estimation. Among these fast methods, the Fast Sum Updating approach is an attractive alternative, as it is an exact method and its speed is independent of the input sample and the bandwidth. Unfortunately, this method, based on data sorting, has for the most part been limited to the univariate case. In this paper, we revisit the fast sum updating approach and extend it in several ways. Our main contribution is to extend it to the general multivariate case for general input data and rectilinear evaluation grids. Other contributions include its extension to a wider class of kernels, including the triangular, cosine and Silverman kernels; its combination with parsimonious additive multivariate kernels; and its combination with a fast approximate k-nearest-neighbors bandwidth for multivariate datasets. Our numerical tests of multivariate regression and density estimation confirm the speed, accuracy and stability of the method. We hope this paper will renew interest in the fast sum updating approach and help solve large-scale practical density estimation and regression problems.
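    A toy one-dimensional illustration of why sorting defeats the quadratic cost (a box-kernel special case, not the paper's general fast-sum-updating algorithm): once the sample is sorted, each box-kernel density value is a window count obtained by two binary searches.

        import numpy as np

        def kde_box_fast(data, queries, h):
            # box-kernel KDE: f(q) = #{|x_i - q| <= h} / (2 h N)
            xs = np.sort(data)                                   # O(N log N), once
            lo = np.searchsorted(xs, queries - h, side="left")   # first x_i >= q-h
            hi = np.searchsorted(xs, queries + h, side="right")  # first x_i >  q+h
            return (hi - lo) / (2.0 * h * len(xs))

        rng = np.random.default_rng(2)
        x = rng.normal(size=10_000)
        dens = kde_box_fast(x, np.linspace(-3, 3, 101), h=0.2)

    The paper's contribution is to obtain the same kind of speed-up for smoother kernels (triangular, cosine, Silverman, ...) and in the multivariate setting, where simple window counts no longer suffice.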
  • Some machine learning schemes for high-dimensional nonlinear PDEs.

    Come HURE, Huyen PHAM, Xavier WARIN
    2019
    We propose new machine learning schemes for solving high dimensional nonlinear partial differential equations (PDEs). Relying on the classical backward stochastic differential equation (BSDE) representation of PDEs, our algorithms estimate simultaneously the solution and its gradient by deep neural networks. These approximations are performed at each time step from the minimization of loss functions defined recursively by backward induction. The methodology is extended to variational inequalities arising in optimal stopping problems. We analyze the convergence of the deep learning schemes and provide error estimates in terms of the universal approximation of neural networks. Numerical results show that our algorithms give very good results up to dimension 50 (and certainly above), for both PDE and variational inequality problems. For the resolution of PDEs, our results are very similar to those obtained by the recent method in \cite{weinan2017deep} when the latter converges to the right solution or does not diverge. Numerical tests indicate that the proposed methods are not stuck in poor local minima, as can be the case with the algorithm designed in \cite{weinan2017deep}, and no divergence is experienced. The only limitation seems to be the inability of the considered deep neural networks to represent a solution with too complex a structure in high dimension.
  • Asymptotic optimal valuation with asymmetric risk and applications in finance.

    Isaque SANTA BRIGIDA PIMENTEL, Mireille BOSSY, Emmanuel GOBET, Xavier WARIN, Nizar TOUZI, Frederic ABERGEL, Jean-Francois CHASSAGNEUX
    2018
    This thesis consists of two parts that can be read independently. In the first part of the thesis, we study hedging and option pricing problems related to a risk measure. Our main approach is the use of an asymmetric risk function and an asymptotic framework in which we obtain optimal solutions through nonlinear partial differential equations (PDEs). In the first chapter, we focus on the valuation and hedging of European options. We consider the problem of optimizing the residual risk generated by a discrete-time hedge in the presence of an asymmetric risk criterion. Instead of analyzing the asymptotic behavior of the solution of the associated discrete problem, we study the asymmetric residual risk measure integrated in a Markovian framework. In this context, we show the existence of this asymptotic risk measure. We then describe an asymptotically optimal hedging strategy via the solution of a fully nonlinear PDE. The second chapter applies this hedging method to the problem of valuing the output of a power plant. Since the power plant generates maintenance costs whether it is on or off, we are interested in reducing the risk associated with the uncertain revenues of this power plant by hedging with futures contracts. In the second part of the thesis, we consider several control problems related to economics and finance. The third chapter is dedicated to the study of a class of McKean-Vlasov (MKV) type problems with common noise, called conditional polynomial MKV. We reduce this polynomial class, by Markov folding, to finite-dimensional control problems. We compare three different probabilistic techniques for numerically solving the reduced problem: quantization, control randomization with regression, and delayed regression. We provide many numerical examples, such as portfolio selection with uncertainty about an underlying trend. In the fourth chapter, we solve dynamic programming equations associated with financial valuations in the energy market. We assume that a calibrated model for the underlyings is not available and that only a small sample obtained from historical data is accessible. Moreover, in this context, we assume that futures contracts are often governed by hidden factors modeled by Markov processes. We propose a non-intrusive method to solve these equations through empirical regression techniques using only the historical log prices of observable futures contracts.
  • Nesting Monte Carlo for high-dimensional non-linear PDEs.

    Xavier WARIN
    Monte Carlo Methods and Applications | 2018
    No summary available.
  • STochastic OPTimization library in C++.

    Hugo GEVRET, Nicolas LANGRENE, Jerome LELONG, Xavier WARIN, Aditya MAHESHWARI
    2018
    The STochastic OPTimization library (StOpt) aims at providing tools in C++ for solving some stochastic optimization problems encountered in finance or in industry. A Python binding is available for some of the provided C++ objects, making it easy to solve an optimization problem by regression. Different methods are available:
    • dynamic programming methods based on Monte Carlo with regressions (global, local and sparse regressors), for underlying states following some uncontrolled Stochastic Differential Equations (Python binding provided);
    • Semi-Lagrangian methods for general Hamilton-Jacobi-Bellman equations, for underlying states following some controlled Stochastic Differential Equations (C++ only);
    • Stochastic Dual Dynamic Programming (SDDP) methods to deal with stochastic stock management problems in high dimension. An SDDP module in Python is provided; to use this module, the transitional optimization problem has to be written in C++ and mapped to Python (examples provided);
    • Some methods are provided to solve, by Monte Carlo, some problems where the underlying stochastic state is controlled;
    • Some pure Monte Carlo methods are proposed to solve some nonlinear PDEs.
    For each method, a framework is provided to optimize the problem and then simulate it out of sample using the optimal commands previously calculated. Parallelization methods based on OpenMP and MPI are provided in this framework, allowing high-dimensional problems to be solved on clusters. The library should be flexible enough to be used at different levels depending on the user's needs. A toy illustration of the first family of methods is sketched after this entry.
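    As promised above, here is a hedged Python sketch of regression-based dynamic programming in the Longstaff-Schwartz style, pricing a Bermudan put on a simulated uncontrolled diffusion; this is not StOpt's API, and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        n_paths, n_steps = 50_000, 50
        s0, K, r, sigma, T = 1.0, 1.0, 0.02, 0.3, 1.0
        dt = T / n_steps

        # simulate uncontrolled geometric Brownian motion paths
        z = rng.normal(size=(n_paths, n_steps))
        s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1))

        payoff = lambda x: np.maximum(K - x, 0.0)
        value = payoff(s[:, -1])                         # exercise at maturity
        for i in range(n_steps - 2, -1, -1):             # backward induction
            value *= np.exp(-r * dt)                     # discount one step
            itm = payoff(s[:, i]) > 0                    # regress on ITM paths only
            A = np.vander(s[itm, i], 4)                  # polynomial regressors
            coef, *_ = np.linalg.lstsq(A, value[itm], rcond=None)
            exercise = payoff(s[itm, i])
            value[itm] = np.where(exercise > A @ coef, exercise, value[itm])
        price = np.exp(-r * dt) * value.mean()           # discount to time 0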
  • Option valuation and hedging using asymmetric risk function: asymptotic optimality through fully nonlinear Partial Differential Equations.

    Emmanuel GOBET, Isaque PIMENTEL, Xavier WARIN
    2018
    Discrete time hedging produces a residual risk, namely, the tracking error. The major problem is to get valuation/hedging policies minimizing this error. We evaluate the risk between trading dates through a function penalizing profits and losses asymmetrically. After deriving the asymptotics within a discrete time risk measurement for a large number of trading dates, we derive the optimal strategies minimizing the asymptotic risk in the continuous time setting. We characterize the optimality through a class of fully nonlinear Partial Differential Equations (PDE). Numerical experiments show that the optimal strategies associated with the discrete and asymptotic approaches coincide asymptotically.
  • Branching diffusion representation of semilinear PDEs and Monte Carlo approximation.

    Pierre HENRY LABORDERE, Nadia OUDJANE, Xiaolu TAN, Nizar TOUZI, Xavier WARIN
    2017
    We provide a representation result for parabolic semilinear PDEs, with polynomial nonlinearity, by branching diffusion processes. We extend the classical representation for KPP equations, introduced by Skorokhod [23], Watanabe [27] and McKean [18], by allowing for polynomial nonlinearity in the pair (u, Du), where u is the solution of the PDE with space gradient Du. Similar to the previous literature, our result requires a non-explosion condition which restricts to "small maturity" or "small nonlinearity" of the PDE. Our main ingredient is the automatic differentiation technique as in [15], based on the Malliavin integration by parts, which allows one to account for the nonlinearities in the gradient. As a consequence, the particles of our branching diffusion are marked by the nature of the nonlinearity. This new representation has very important numerical implications, as it is suitable for Monte Carlo simulation. Indeed, this provides the first numerical method for high dimensional nonlinear PDEs with an error estimate induced by the dimension-free central limit theorem. The complexity is also easily seen to be of the order of the squared dimension. The final section of this paper illustrates the efficiency of the algorithm by some high dimensional numerical experiments.
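    A toy Monte Carlo sampler for the classical KPP special case the paper generalizes (a sketch under simplifying assumptions, not the paper's algorithm): for the one-dimensional equation d_t v + (1/2) v'' + v^2 - v = 0 with v(T, .) = g, the solution is the expected product of g over the particles of a rate-1 binary branching Brownian motion alive at time T.

        import numpy as np

        rng = np.random.default_rng(4)

        def branching_sample(t, x, T, g):
            # one sample of v(t, x) = E[ prod_i g(position of particle i at T) ]
            tau = rng.exponential(1.0)                   # particle lifetime
            if t + tau >= T:                             # survives to the horizon
                return g(x + rng.normal(scale=np.sqrt(T - t)))
            xt = x + rng.normal(scale=np.sqrt(tau))      # position at branching time
            # dies and branches into two offspring (the v^2 nonlinearity)
            return (branching_sample(t + tau, xt, T, g)
                    * branching_sample(t + tau, xt, T, g))

        g = lambda x: 0.5 + 0.4 * np.cos(x)              # bounded terminal condition
        estimate = np.mean([branching_sample(0.0, 0.0, 0.5, g) for _ in range(20_000)])

    Gradient nonlinearities are handled in the paper by marking the particles and reweighting with Malliavin-type automatic differentiation weights.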
Affiliations are detected from the signatures of publications identified in scanR. An author can therefore appear to be affiliated with several structures or supervisors according to these signatures. The dates displayed correspond only to the dates of the publications found. For more information, see https://scanr.enseignementsup-recherche.gouv.fr