JAY Emmanuelle

Affiliations
  • 2001 - 2002
    Université de Cergy Pontoise
  • 2013
  • 2002
Publications
  • Multi-factor models and signal processing techniques: application to quantitative finance.

    Serge DAROLLES, Patrick DUVAUT, Emmanuelle JAY
    2013
    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices. This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an interesting alternative to the selection of factors (both fundamental and statistical factors) and can provide more efficient estimation procedures, based for instance on lq-regularized Kalman filtering. With numerous illustrative examples from stock markets, this book meets the needs of both finance practitioners and graduate students in science, econometrics and finance.
  • A Regularized Kalman Filter (rgKF) for Spiky Data.

    Serge DAROLLES, Patrick DUVAUT, Emmanuelle JAY
    Multi-Factor Models and Signal Processing Techniques | 2013
    This chapter presents a new family of algorithms named regularized Kalman filters (rgKFs), derived to detect and estimate exogenous outliers that might occur in the observation equation of a standard Kalman filter (KF). Inspired by the robust Kalman filter (RKF) of Mattingley and Boyd, which makes use of an l1-regularization step, the authors introduce a simple but efficient detection step in the recursive equations of the RKF. This detection step offers one way to adapt the value of the l1-regularization parameter: when an outlier is detected in the innovation term of the KF, the regularization parameter is set to a value that lets the l1-based optimization problem estimate the amplitude of the spike. The chapter deals with the application of the algorithm to the detection of irregularities in hedge fund returns. (An illustrative code sketch of this detection step is given after the publication list.)
  • Least Squares Estimation (LSE) and Kalman Filtering (KF) for Factor Modeling: A Geometrical Perspective.

    Serge DAROLLES, Patrick DUVAUT, Emmanuelle JAY
    Multi-Factor Models and Signal Processing Techniques | 2013
    This chapter introduces, illustrates and derives both least squares estimation (LSE) and Kalman filter (KF) estimation of the alpha and betas of a return, for a given number of factors that have already been selected. It formalizes the “per return factor model” and the concept of recursive estimation of the alpha and betas. The chapter explains the setup, objective, criterion, interpretation and derivations of LSE; the setup, main properties, objective, interpretation, practice and geometrical derivation of KF are also discussed. Numerous simulation results are displayed and commented on throughout the chapter to illustrate the behavior, performance and limitations of LSE and KF. (A minimal least squares sketch is given after the publication list.)
  • Factor Selection.

    Serge DAROLLES, Patrick DUVAUT, Emmanuelle JAY
    Multi-Factor Models and Signal Processing Techniques | 2013
    This chapter focuses on the empirical ad hoc approach and presents three reference models that are widely used in the literature. These models are all based on the factor representation, but highlight the nature of the factors to be used to explain specific asset class returns. The authors denote by eigenfactors the factors obtained from the observations through the eigenvector decomposition of the covariance matrix of the returns. The chapter describes some classical techniques arising from information theory. Complementary sections shed light on problems related to this approach, such as the estimation of the covariance matrix of the data, the similarity of the approach with subspace methods, and its extension to large panel data. (An eigenfactor extraction sketch is given after the publication list.)
  • Factor Models and General Definition.

    Serge DAROLLES, Patrick DUVAUT, Emmanuelle JAY
    Multi-Factor Models and Signal Processing Techniques | 2013
    This chapter introduces the common version of linear factor models and discusses their limits and developments. It introduces different notations and discusses the model and its structure. The chapter lists the reasons why factor models are generally used in finance, and explains the limits of this approach. It also deals with the different steps in the building of factor models, i.e. factor selection and parameter estimation. Finally, the chapter gives a historical perspective on the use of factor models in finance, such as the capital asset pricing model (CAPM), Sharpe's market model and the arbitrage pricing theory (APT). (The general factor model equation is recalled after the publication list.)
  • Detection in a non-Gaussian environment.

    Emmanuelle JAY, Patrick DUVAUT
    2002
    Radar echoes arising from the various reflections of the emitted signal off elements of the environment (the clutter) have long been modeled as Gaussian vectors. The optimal detection procedure then reduced to the implementation of the classical matched filter. With the technological evolution of radar systems, the true nature of the clutter has been shown to be no longer Gaussian. Although the optimality of the matched filter is challenged in such cases, CFAR (Constant False Alarm Rate) techniques have been proposed for this detector in order to adapt the detection threshold to the multiple local variations of the clutter. In spite of their diversity, these techniques have proved to be neither robust nor optimal in these situations. By modeling the clutter with complex non-Gaussian processes, such as Spherically Invariant Random Processes (SIRPs), optimal coherent detection structures have been derived. These models encompass many non-Gaussian distributions, such as the K-distribution or the Weibull distribution, and are recognized in the literature as relevant models for many experimental situations. In order to identify the distribution of their characteristic component, the texture, without any statistical preconception about the model, this thesis approaches the problem from a Bayesian standpoint. Two new methods for estimating the texture distribution are proposed: the first is a parametric method based on a Padé approximation of the moment generating function, and the second is a Monte Carlo estimation. These estimates are performed on reference clutter data and lead to two new optimal detection strategies, named PEOD (Padé Estimated Optimum Detector) and BORD (Bayesian Optimum Radar Detector), respectively. The asymptotic expression of the BORD (convergence in law), called the "Asymptotic BORD", is established together with its distribution. This last result gives access to the optimal theoretical performance of the Asymptotic BORD, which also applies to the BORD when the correlation matrix of the data is non-singular. The detection performance of the BORD and the Asymptotic BORD is evaluated on experimental ground clutter data. The results obtained validate both the relevance of the SIRP model for the clutter and the optimality and adaptability of the BORD to any type of environment. (A clutter simulation sketch is given after the publication list.)
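Illustrative sketches
The short sketches below illustrate, under stated assumptions, some of the techniques mentioned in the publication summaries above. They are minimal, hedged illustrations written for this page, not the authors' reference implementations; every model choice, parameter value and helper name in them is hypothetical.

First, the idea behind the regularized Kalman filter (rgKF) for spiky data: a standard Kalman filter is augmented with a detection test on the innovation, and when the innovation is abnormally large the observation is assumed to carry an exogenous spike whose amplitude is estimated by an l1-type step (here a scalar soft threshold, the closed-form solution of a one-dimensional l1-penalized problem) and removed before the measurement update. The scalar state-space model, the 3-sigma detection rule and the soft-thresholding choice are illustrative assumptions, not the book's exact equations.

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form solution of min_s 0.5*(z - s)**2 + lam*|s| (scalar l1 problem)."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def rgkf_like_filter(y, a=0.95, q=0.01, r=0.04, k_detect=3.0):
    """Scalar Kalman filter with an innovation-based outlier detection step.

    When the normalized innovation exceeds k_detect, the observation is assumed
    to contain an exogenous spike; its amplitude is estimated by soft thresholding
    and removed before the measurement update (illustrative rgKF-like behaviour).
    """
    x_est = np.zeros(len(y))     # filtered state estimates
    spikes = np.zeros(len(y))    # estimated spike amplitudes
    x, p = 0.0, 1.0              # state mean and variance
    for t, obs in enumerate(y):
        x_pred, p_pred = a * x, a * a * p + q        # prediction step
        innov, s = obs - x_pred, p_pred + r          # innovation and its variance
        if abs(innov) > k_detect * np.sqrt(s):       # detection step
            spikes[t] = soft_threshold(innov, k_detect * np.sqrt(s))
            innov -= spikes[t]                       # remove the estimated spike
        gain = p_pred / s                            # measurement update
        x, p = x_pred + gain * innov, (1.0 - gain) * p_pred
        x_est[t] = x
    return x_est, spikes

# toy usage: an AR(1) signal observed in noise, with two artificial spikes
rng = np.random.default_rng(0)
true_x = np.zeros(200)
for t in range(1, 200):
    true_x[t] = 0.95 * true_x[t - 1] + 0.1 * rng.standard_normal()
obs = true_x + 0.2 * rng.standard_normal(200)
obs[50] += 3.0
obs[120] -= 2.5
filtered, detected = rgkf_like_filter(obs)
print("spike locations detected:", np.nonzero(detected)[0])
```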
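Next, a minimal least squares estimate of the alpha and betas of a single return on already-selected factors, as discussed in the chapter on LSE and Kalman filtering for factor modeling. This is the textbook OLS computation for the per-return factor model; the simulated data and parameter values are purely illustrative, and the chapter's Kalman filter treatment (recursive, time-varying estimates) is not reproduced here.

```python
import numpy as np

def estimate_alpha_betas(returns, factors):
    """OLS estimate of alpha and betas in the per-return factor model
    r_t = alpha + sum_k beta_k * f_{k,t} + eps_t."""
    T = factors.shape[0]
    X = np.column_stack([np.ones(T), factors])       # regressors: intercept + factors
    coeffs, *_ = np.linalg.lstsq(X, returns, rcond=None)
    alpha, betas = coeffs[0], coeffs[1:]
    residuals = returns - X @ coeffs
    return alpha, betas, residuals

# toy usage: two simulated factors and one simulated return series
rng = np.random.default_rng(1)
T = 500
factors = 0.01 * rng.standard_normal((T, 2))
true_alpha, true_betas = 0.0002, np.array([0.8, -0.3])
returns = true_alpha + factors @ true_betas + 0.005 * rng.standard_normal(T)
alpha_hat, betas_hat, _ = estimate_alpha_betas(returns, factors)
print("alpha:", round(alpha_hat, 5), "betas:", np.round(betas_hat, 3))
```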
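A sketch of the eigenfactor idea from the factor-selection chapter: statistical factors are extracted from the observed returns themselves through the eigenvector decomposition of their sample covariance matrix, keeping the leading eigenvectors. The explained-variance rule used below to pick the number of factors is a simplifying assumption; the chapter relies on information-theoretic criteria for that choice.

```python
import numpy as np

def eigenfactors(returns, explained_var=0.90):
    """Extract statistical factors ('eigenfactors') from a (T, N) matrix of N
    asset return series via eigendecomposition of the sample covariance matrix."""
    centered = returns - returns.mean(axis=0)
    cov = np.cov(centered, rowvar=False)              # (N, N) sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                 # reorder: largest first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # keep the smallest number of eigenfactors reaching the target variance;
    # the book relies on information-theoretic criteria instead of this rule
    ratio = np.cumsum(eigvals) / np.sum(eigvals)
    k = int(np.searchsorted(ratio, explained_var)) + 1
    loadings = eigvecs[:, :k]                         # (N, k) retained eigenvectors
    factor_series = centered @ loadings               # (T, k) eigenfactor returns
    return factor_series, loadings, eigvals

# toy usage: 10 assets driven by 2 latent factors plus idiosyncratic noise
rng = np.random.default_rng(2)
T, N = 1000, 10
latent = 0.01 * rng.standard_normal((T, 2))
exposures = rng.standard_normal((2, N))
rets = latent @ exposures + 0.002 * rng.standard_normal((T, N))
f, B, ev = eigenfactors(rets)
print("number of retained eigenfactors:", f.shape[1])
```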
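For reference, the general linear factor model discussed in "Factor Models and General Definition" can be written, in standard notation (not necessarily the book's), as:

```latex
% return of asset i at date t explained by K common factors
r_{i,t} = \alpha_i + \sum_{k=1}^{K} \beta_{i,k}\, f_{k,t} + \varepsilon_{i,t},
\qquad \mathbb{E}[\varepsilon_{i,t}] = 0, \qquad
\operatorname{Cov}\!\left(f_{k,t}, \varepsilon_{i,t}\right) = 0,
```

where alpha_i is the intercept of asset i, beta_{i,k} its exposure to factor k, f_{k,t} the factor value at date t and epsilon_{i,t} the idiosyncratic term. The CAPM corresponds to a single market factor (K = 1), while the APT allows several factors.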
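Finally, a sketch related to the thesis abstract on detection in non-Gaussian clutter: a Spherically Invariant Random Process (SIRP) sample can be simulated as a positive random texture multiplying a complex Gaussian speckle vector; with a Gamma-distributed texture the resulting amplitude follows a K-distribution, one of the non-Gaussian laws cited in the abstract. The shape parameter, vector size and tail-probability comparison below are arbitrary illustrative choices, and no detector (PEOD or BORD) is implemented here.

```python
import numpy as np

def simulate_sirp_clutter(n_samples, dim, nu=0.5, cov=None, seed=0):
    """Simulate SIRP clutter vectors c = sqrt(tau) * g, where tau is a positive
    Gamma-distributed texture (giving K-distributed amplitudes) and g is a
    zero-mean circular complex Gaussian speckle vector with covariance `cov`."""
    rng = np.random.default_rng(seed)
    cov = np.eye(dim) if cov is None else cov
    L = np.linalg.cholesky(cov)                       # correlate the speckle
    g = (rng.standard_normal((n_samples, dim)) +
         1j * rng.standard_normal((n_samples, dim))) / np.sqrt(2.0)
    g = g @ L.T
    # Gamma texture with unit mean; smaller nu means spikier (heavier-tailed) clutter
    tau = rng.gamma(shape=nu, scale=1.0 / nu, size=(n_samples, 1))
    return np.sqrt(tau) * g

# toy usage: compare tail probabilities of SIRP clutter and pure Gaussian clutter
sirp = simulate_sirp_clutter(100_000, dim=1, nu=0.5).ravel()
gauss = (np.random.default_rng(1).standard_normal(100_000) +
         1j * np.random.default_rng(2).standard_normal(100_000)) / np.sqrt(2.0)
for name, data in [("SIRP (K-distributed)", sirp), ("Gaussian", gauss)]:
    print(name, "P(|c| > 3) ~", np.mean(np.abs(data) > 3.0))
```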
Affiliations are detected from the signatures of publications identified in scanR. An author can therefore appear to be affiliated with several structures or supervisors according to these signatures. The dates displayed correspond only to the dates of the publications found. For more information, see https://scanr.enseignementsup-recherche.gouv.fr