The numerical toolbox of market finance traditionally rests on three pillars: approximate formulas, deterministic numerical schemes, and simulation methods. The first includes Fourier-transform pricing formulas in affine jump-diffusion models, or implied-volatility asymptotics in the SABR model. The second comprises finite-difference or finite-element techniques for the PDEs of finance. The third rests on the Feynman-Kac probabilistic representation of the solutions of these PDEs or, in the nonlinear case, on the formulation of these PDEs as backward stochastic differential equations (BSDEs).
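As an illustration of the third pillar, the following minimal sketch prices a European call by Monte Carlo via the Feynman-Kac representation and compares it with the Black-Scholes closed form (an instance of the first pillar). All parameters and the lognormal model are illustrative assumptions, not specifics of the Chair's work.

```python
# Minimal sketch: Monte Carlo pricing via the Feynman-Kac representation,
# benchmarked against a closed-form formula. Parameters are illustrative.
import numpy as np
from scipy.stats import norm

S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.02, 0.2
rng = np.random.default_rng(0)

# Simulate terminal prices under risk-neutral Black-Scholes dynamics.
Z = rng.standard_normal(100_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Feynman-Kac: the pricing PDE's solution equals the discounted expected payoff.
mc_price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

# Closed-form benchmark (first pillar).
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(f"Monte Carlo: {mc_price:.4f}, closed form: {bs_price:.4f}")
```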
This Chair project concerns statistical learning methods applied to simulated data in finance. Learning is conceived not as a way of modelling from data (since the data are simulated within a predefined model or class of models), but as a fourth pillar alongside the previous toolbox. Quantitative finance indeed offers a vast field of application for statistical learning techniques implemented on simulated data. This is a well-established tradition, originating in the simulation/regression schemes for pricing Bermudan options à la Longstaff and Schwartz, which have since been considerably extended to BSDEs. Nevertheless, one can speak of a recent technological breakthrough in this field, which has been disrupted by the influx of machine learning techniques.
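The sketch below shows the simulation/regression idea in its original Longstaff-Schwartz form: continuation values are learned by regression on simulated paths, then compared with immediate exercise. The geometric Brownian model, put payoff, and polynomial basis are illustrative assumptions, not the Chair's specific method.

```python
# Hedged sketch of a Longstaff-Schwartz simulation/regression scheme
# for a Bermudan put; model, payoff and basis are illustrative choices.
import numpy as np

S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.04, 0.2
n_steps, n_paths = 50, 100_000
dt = T / n_steps
rng = np.random.default_rng(1)

# Simulate geometric Brownian motion paths.
Z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1)
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

payoff = lambda s: np.maximum(K - s, 0.0)

# Backward induction: learn continuation values by polynomial regression.
cashflow = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    cashflow *= np.exp(-r * dt)   # discount one step back
    itm = payoff(S[:, t]) > 0     # regress on in-the-money paths only
    coeffs = np.polyfit(S[itm, t], cashflow[itm], deg=2)
    continuation = np.polyval(coeffs, S[itm, t])
    exercise = payoff(S[itm, t]) > continuation
    idx = np.where(itm)[0][exercise]
    cashflow[idx] = payoff(S[idx, t])  # exercise where optimal

price = np.exp(-r * dt) * np.mean(cashflow)
print(f"Bermudan put (Longstaff-Schwartz): {price:.4f}")
```

Replacing the polynomial regression with a neural network is, in essence, the kind of extension that recent learning techniques bring to this classical scheme.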
This evolution can be explained by the convergence of two factors: such techniques have become practically accessible in terms of computing power, and the paradigm of derivatives management has shifted, following the 2008-09 crisis, from a replication framework to a capital and collateral optimisation framework, going hand in hand with a growing trend towards trading automation (through platforms wherever possible).
The Chair is situated at this meeting point between the growing computational needs of investment banks, driven by increasing regulation, and machine learning techniques. Banks are subject to a growing number of risk-measurement calculations. They are also required to compute various XVA metrics, i.e. valuation adjustments that account for counterparty risk and its consequences in terms of capital and collateral costs. These calculations must be carried out at different levels of aggregation: the bank's netting sets (client portfolios), the broader level of funding sets for funding-cost calculations, and even the bank's balance sheet as a whole for certain economic cost and cost-of-capital calculations.
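To make the netting-set level concrete, here is a minimal sketch of the most basic XVA metric, a unilateral CVA computed as (1 − R) Σᵢ D(tᵢ) EPE(tᵢ) ΔPD(tᵢ). The Gaussian exposure paths, flat hazard rate, and all parameter values are placeholder assumptions standing in for a full portfolio revaluation.

```python
# Hedged sketch of a netting-set level CVA; exposures below are placeholder
# Brownian paths, not an actual portfolio revaluation.
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0.0, 5.0, 21)[1:]          # quarterly grid out to 5y
r, hazard, recovery = 0.02, 0.015, 0.4        # illustrative parameters

# Simulated mark-to-market of the netting set along the grid.
mtm = np.cumsum(rng.standard_normal((50_000, len(grid))), axis=1)

epe = np.mean(np.maximum(mtm, 0.0), axis=0)    # expected positive exposure
disc = np.exp(-r * grid)                       # risk-free discount factors
surv = np.exp(-hazard * grid)                  # counterparty survival curve
dpd = -np.diff(np.concatenate([[1.0], surv]))  # default probability increments

cva = (1.0 - recovery) * np.sum(disc * epe * dpd)
print(f"CVA estimate: {cva:.4f}")
```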
Beyond the computational challenges it poses, this regulatory evolution also raises numerous legitimate questions in terms of models (in the usual pricing and risk sense, but also balance-sheet modelling) and model risk.
We will also consider other possible applications of machine learning in finance, this time involving historical data (as opposed to the simulated data above). The challenges are multiple and difficult: non-stationarity of financial data, high dimensionality, limited sample sizes and missing data, and extremes and dependence in the tails of distributions (in connection, for example, with risk measures).
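The small-sample tail issue can be illustrated with a minimal sketch estimating historical VaR and expected shortfall; the synthetic Student-t returns below only stand in for a real (heavy-tailed, possibly non-stationary) series.

```python
# Minimal sketch, assuming daily log-returns: historical 99% VaR and
# expected shortfall on synthetic heavy-tailed data.
import numpy as np

rng = np.random.default_rng(3)
returns = rng.standard_t(df=4, size=1000) * 0.01   # ~4 years of daily data

alpha = 0.99
var = -np.quantile(returns, 1.0 - alpha)           # historical VaR (loss > 0)
es = -np.mean(returns[returns <= -var])            # average loss beyond VaR

print(f"99% VaR: {var:.4%}, 99% ES: {es:.4%}")
# Only ~10 observations fall in the tail at this level, so both estimates
# are noisy: precisely the limited-data, tail-dependence issue noted above.
```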