DUVAUT Patrick

Affiliations
  • 2018 - 2019
    Ecole nationale supérieure des mines de Paris
  • 2012 - 2014
    Equipes Traitement de l'Information et Systèmes
Publications
  • ValYooTrust: Trust and incentive platform for collaborative innovation.

    Laurent DUPONT, Eric SEUILLET, Patrick DUVAUT
    Les écosystèmes d'innovation. Regards croisés des acteurs clés | 2019
    No summary available.
  • Automated Detection Of Defects Signature in Pipelines Using Ultra Sonic Thickness Images.

    Clement FOUQUET, Aymeric HISTACE, Patrick DUVAUT
    Proceedings of 14th ECNDT conference | 2014
    No summary available.
  • Aiding the detection and recognition of structural defects in pipelines by automatic analysis of XtraSonic images.

    Clement FOUQUET, Patrick DUVAUT, Olivier ALATA, Aymeric HISTACE, Frederic PRECIOSO, Michel PAINDAVOINE
    2014
    TRAPIL is a French company in charge of the operation and maintenance of hydrocarbon pipelines. The maintenance of buried pipelines requires the passage of scrapers equipped with ultrasonic probes that map the pipeline structure, which is then analyzed by hand in order to detect and identify the various defects that may appear or evolve. The objective of this thesis is to provide an algorithmic solution to accelerate and complement the work of analysts using modern image and signal processing methods. Our approach follows the experts' modus operandi and is divided into three parts. First, we detect butt welds to separate the pipeline into its constituent tubes. The probe signals representing the circumference of the pipe are grouped and compressed in a short- and long-term mean-comparison change-point detection, and the resulting signals are then merged using a weighting that greatly increases the contrast between noise and weld, providing nearly flawless detection and localization. The tubes then undergo a first segmentation aiming to eliminate as many healthy pixels as possible. Using histogram modeling of thickness values by an EM algorithm initialized for our problem, the algorithm follows a recursive principle comparable to split-and-merge methods to detect and isolate dangerous areas. Finally, the dangerous areas are identified using a random forest trained on a large number of defect examples. This third part focuses on the study of different pattern recognition methods applied to our new problem. Through these different steps, the solutions we have provided allow TRAPIL to save significant time on the most tedious tasks of the analysis process (e.g. 30% on weld detection) and offer them new business opportunities, for example the possibility of providing a pre-report to their customers within a few days while the manual analysis, which can take more than a month, is carried out.
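    The weld-detection step above hinges on comparing a short-term and a long-term running mean and flagging abrupt changes. A minimal sketch of that idea follows; the window lengths, threshold and robust normalization are illustrative choices, not the thesis's tuned implementation:

```python
import numpy as np

def weld_change_points(signal, short_win=16, long_win=256, thresh=3.0):
    """Flag abrupt mean changes by comparing a short-term and a
    long-term running average (illustrative parameters)."""
    mean_s = np.convolve(signal, np.ones(short_win) / short_win, mode="same")
    mean_l = np.convolve(signal, np.ones(long_win) / long_win, mode="same")
    # Normalize the contrast by a robust scale estimate of the signal.
    scale = np.median(np.abs(signal - np.median(signal))) + 1e-12
    contrast = np.abs(mean_s - mean_l) / scale
    return np.where(contrast > thresh)[0]

# Toy example: constant wall thickness with a thin weld seam.
rng = np.random.default_rng(0)
thickness = 8.0 + 0.05 * rng.standard_normal(4000)
thickness[2000:2010] += 1.5          # weld signature
print(weld_change_points(thickness)[:5])
```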
  • Automated Detection and Fine Segmentation of Defects Signature in Pipelines using US Thickness Images.

    Clement FOUQUET, Aymeric HISTACE, Leila MEZIOU, Patrick DUVAUT
    Proceedings of ICNDT 2013 | 2013
    This contribution introduces a robust, content-oriented detector of zones of interest for defect localization in intelligent oil pipeline inspection, achieving a good performance-to-complexity ratio. The method processes the multidimensional data collected by a pipeline inspection device equipped with many ultrasonic sensors (up to 512). It introduces a new content-oriented usage of the EM algorithm, adapted to fit the very peculiar nature of the data, to first isolate candidate zones, followed by a segmentation step to both obtain fine contours of defects and reject false alarms. The performance obtained in terms of specificity and sensitivity shows that the proposed approach is compatible with routine use by specialists.
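    The zone-isolation step above fits a mixture model to the histogram of thickness values with EM. A minimal sketch of a two-component 1-D Gaussian-mixture fit, under the assumption of two populations (healthy wall vs. suspect thickness); the paper's initialization and recursive segmentation logic are not reproduced:

```python
import numpy as np

def em_two_gaussians(x, n_iter=100):
    """EM fit of a two-component 1-D Gaussian mixture
    (quartile initialization is an illustrative choice)."""
    mu = np.percentile(x, [25, 75]).astype(float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
            / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update weights, means, then variances with the new means.
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-12
    return pi, mu, var
```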
  • Co-operative Alien Noise Cancellation in Upstream VDSL: A New Decision Directed Approach.

    Pravesh BIYANI, Amitkumar MAHADEVAN, Shankar PRAKRIYA, Patrick DUVAUT, Surendra PRASAD
    IEEE Transactions on Communications | 2013
    Alien noise in the vectored very-high-speed digital subscriber line (VDSL) system is part of the additive noise at the receiver and exhibits strong correlation among users. We present a per-tone co-operative alien noise cancellation (CoMAC) algorithm for the upstream (US) VDSL that can be applied subsequent to any self far-end-crosstalk (FEXT) mitigation strategy. CoMAC operates by predicting the noise seen by a given user based on the error samples from the remaining users. These errors are conveniently obtained after slicing the self-FEXT-canceled signal of all the vectored users. We show that if the estimation of these errors is accurate, the proposed alien canceler achieves the Cramer-Rao lower bound (CRLB). In practice, the seamless rate adaptation (SRA) operation, which enables increased bit rate by increasing the per-tone bit-loading, can cause decision errors in any decision directed strategy. We also analyze the impact of these decision errors, an issue not addressed in the literature. We propose a strategy for bit-loading during the SRA operation by formulating a max-min optimization problem and demonstrate the possibility of a guaranteed (minimum) improvement in the per-user rate. Simulations indicate that the performance of the algorithm can significantly exceed this minimum in practical situations.
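    CoMAC predicts one user's alien noise from the slicer errors of the other vectored users. A batch least-squares sketch of that prediction step (the actual algorithm is adaptive and runs per tone; dimensions and toy data below are illustrative):

```python
import numpy as np

def alien_noise_canceller(errors_others, noise_target):
    """Least-squares fit of a linear predictor of one user's alien
    noise from the other users' slicer errors, then the residual
    after cancellation (a batch stand-in for the adaptive scheme)."""
    # errors_others: (n_symbols, n_users-1) complex slicer errors
    # noise_target:  (n_symbols,) complex alien noise seen by the user
    w, *_ = np.linalg.lstsq(errors_others, noise_target, rcond=None)
    return w, noise_target - errors_others @ w

rng = np.random.default_rng(1)
E = rng.standard_normal((500, 3)) + 1j * rng.standard_normal((500, 3))
w_true = np.array([0.5, -0.2j, 0.1])
n = E @ w_true + 0.05 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
w_hat, res = alien_noise_canceller(E, n)
print(np.round(w_hat, 2))
```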
  • Region-based approximation to solve inference in loopy factor graphs: decoding LDPC codes by the Generalized Belief Propagation.

    Jean christophe SIBEL, David DECLERCQ, Brigitte VALLEE, Sylvain REYNAL, Bane VASIC, Patrick DUVAUT, Charly POULLIAT
    2013
    In this thesis, we study the problem of Bayesian inference in factor graphs, in particular LDPC codes, which are approximately solved by message-passing algorithms. In particular, we carry out an in-depth study of Belief Propagation (BP), whose suboptimality arises when the factor graph has loops. Starting from the equivalence between BP and the Bethe approximation in statistical physics, which generalizes to the region-based approximation, we detail Generalized Belief Propagation (GBP), a message-passing algorithm between clusters of the factor graph. We show through experiments that GBP outperforms BP when the clustering is performed according to the harmful topological structures that prevent BP from decoding well, namely trapping sets. Beyond the study of performance in terms of error rate, we compare the dynamics of the two algorithms on non-trivial error events, in particular when they exhibit chaotic behavior. Through classical and original estimators, we show that GBP can dominate BP.
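    For readers unfamiliar with the message-passing setting, here is a minimal min-sum belief-propagation decoder for a small parity-check code. It is the standard BP variant, not the thesis's GBP over regions; the Hamming-style matrix and LLR values are illustrative:

```python
import numpy as np

def min_sum_decode(H, llr, n_iter=20):
    """Min-sum BP decoding of a binary linear code given per-bit LLRs
    (positive LLR favors bit 0). Standard flooding schedule."""
    m, n = H.shape
    msg = np.zeros((m, n))                      # check-to-variable messages
    for _ in range(n_iter):
        # Variable-to-check: total belief minus the incoming message.
        total = llr + msg.sum(axis=0)
        v2c = np.where(H == 1, total - msg, 0.0)
        # Check-to-variable: sign product and minimum magnitude of the others.
        for i in range(m):
            idx = np.flatnonzero(H[i])
            vals = v2c[i, idx]
            sgn_all = np.prod(np.sign(vals))
            for k, j in enumerate(idx):
                others = np.abs(np.delete(vals, k))
                msg[i, j] = sgn_all * np.sign(vals[k]) * others.min()
    return (llr + msg.sum(axis=0) < 0).astype(int)

# (7,4) Hamming parity checks; one corrupted bit gets fixed.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.full(7, 4.0)
llr[2] = -1.0                                   # channel error on bit 2
print(min_sum_decode(H, llr))                   # -> all zeros
```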
  • Multi-factor models and signal processing techniques: application to quantitative finance.

    Serge DAROLLES, Patrick DUVAUT, Emmanuelle JAY
    2013
    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices. This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an interesting alternative for the selection of factors (both fundamental and statistical factors) and can provide more efficient estimation procedures, based on lq-regularized Kalman filtering for instance. With numerous illustrative examples from stock markets, this book meets the needs of both finance practitioners and graduate students in science, econometrics and finance.
  • A Regularized Kalman Filter (rgKF) for Spiky Data.

    Serge DAROLLES, Patrick DUVAUT, Emmanuelle JAY
    Multi-Factor Models and Signal Processing Techniques | 2013
    This chapter presents a new family of algorithms, named regularized Kalman filters (rgKFs), that have been derived to detect and estimate exogenous outliers that might occur in the observation equation of a standard Kalman filter (KF). Inspired by the robust Kalman filter (RKF) of Mattingley and Boyd, which makes use of an l1-regularization step, the authors introduce a simple but efficient detection step in the recursive equations of the RKF. This solution is one means by which to solve the problem of adapting the value of the l1-regularization parameter: when an outlier is detected in the innovation term of the KF, the regularization parameter is set to a value that lets the l1-based optimization problem estimate the amplitude of the spike. The chapter deals with the application of the algorithm to detect irregularities in hedge fund returns.
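    A 1-D sketch of the rgKF idea: run a standard Kalman recursion, but when the innovation is abnormally large, treat the excess as a spike and remove it. The scalar random-walk model and the soft-threshold rule below are simplifications of the chapter's l1-regularized estimate:

```python
import numpy as np

def rgkf_1d(y, q=1e-4, r=1e-2, k_sigma=3.0):
    """Scalar Kalman filter with a spike-detection step on the
    innovation (random-walk state; thresholding rule is a
    simplification of the l1-regularized amplitude estimate)."""
    x, p = y[0], 1.0
    states, spikes = [], []
    for yt in y:
        p = p + q                          # predict (random-walk state)
        nu = yt - x                        # innovation
        s = np.sqrt(p + r)                 # innovation standard deviation
        # Outlier detection: soft-threshold the innovation at k_sigma * s.
        spike = np.sign(nu) * max(abs(nu) - k_sigma * s, 0.0)
        nu -= spike
        k = p / (p + r)                    # Kalman gain
        x = x + k * nu                     # update on the cleaned innovation
        p = (1 - k) * p
        states.append(x)
        spikes.append(spike)
    return np.array(states), np.array(spikes)
```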
  • Least Squares Estimation (LSE) and Kalman Filtering (KF) for Factor Modeling: A Geometrical Perspective.

    Serge DAROLLES, Patrick DUVAUT, Emmanuelle JAY
    Multi-Factor Models and Signal Processing Techniques | 2013
    This chapter introduces, illustrates and derives both least squares estimation (LSE) and Kalman filter (KF) estimation of the alpha and betas of a return, for a given number of factors that have already been selected. It formalizes the “per return factor model” and the concept of recursive estimate of the alpha and betas. The chapter explains the setup, objective, criterion, interpretation, and derivations of LSE. The setup, main properties, objective, interpretation, practice, and geometrical derivation of KF are also discussed. The chapter also explains the working of LSE and KF. Numerous simulation results are displayed and commented throughout the chapter to illustrate the behaviors, performance and limitations of LSE and KF.
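    A minimal batch LSE of the alpha and betas for one return series, in the spirit of the chapter's "per return factor model" (synthetic data; the recursive KF counterpart is not shown):

```python
import numpy as np

def alpha_betas_lse(returns, factors):
    """Batch least-squares estimate of alpha and betas against
    pre-selected factors."""
    # Design matrix: a constant column for alpha, one column per factor.
    X = np.column_stack([np.ones(len(factors)), factors])
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return coef[0], coef[1:]

rng = np.random.default_rng(0)
F = rng.standard_normal((250, 2))               # two factor series
r = 0.001 + F @ np.array([0.8, -0.3]) + 0.01 * rng.standard_normal(250)
print(alpha_betas_lse(r, F))                    # ~ (0.001, [0.8, -0.3])
```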
  • Factor Selection.

    Serge DAROLLES, Patrick DUVAUT, Emmanuelle JAY
    Multi-Factor Models and Signal Processing Techniques | 2013
    This chapter focuses on the empirical ad hoc approach and presents three reference models that are widely used in the literature. These models are all based on the factor representation, but highlight the nature of the factors to be used to explain specific asset class returns. The authors denote by eigenfactors the factors obtained from the observations using the eigenvector decomposition of the covariance matrix of the returns. The chapter describes some classical techniques arising from information theory. It provides complementary sections which shed some light on problems related to this approach, such as the estimation of the covariance matrix of the data, the similarity of the approach with subspace methods, and the extension of this approach to large panel data.
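    A minimal construction of the chapter's eigenfactors from a panel of returns, via the eigendecomposition of the sample covariance matrix (the selection criteria from information theory are not reproduced):

```python
import numpy as np

def eigenfactors(returns, k):
    """Extract the k leading eigenfactors from a (T, n_assets) panel
    of returns: project the centered returns on the eigenvectors of
    the sample covariance matrix with the largest eigenvalues."""
    centered = returns - returns.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(vals)[::-1][:k]      # largest eigenvalues first
    return centered @ vecs[:, order]        # (T, k) factor time series
```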
  • Factor Models and General Definition.

    Serge DAROLLES, Patrick DUVAUT, Emmanuelle JAY
    Multi-Factor Models and Signal Processing Techniques | 2013
    This chapter introduces the common version of linear factor models and also discusses its limits and developments. It introduces different notations and discusses the model and its structure. The chapter lists the reasons why factor models are generally used in finance, and further explains the limits of this approach. It also deals with the different steps in building factor models, i.e. factor selection and parameter estimation. Finally, the chapter gives a historical perspective on the use of factor models such as the capital asset pricing model (CAPM), Sharpe's market model and arbitrage pricing theory (APT) in finance.
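    For reference, a standard statement of the model the chapter introduces, in conventional notation (not necessarily the book's exact symbols):

```latex
% Linear K-factor model for the return of asset i at time t,
% and the CAPM relation as the one-factor special case.
\begin{align}
  r_{i,t} &= \alpha_i + \sum_{k=1}^{K} \beta_{i,k}\, f_{k,t} + \varepsilon_{i,t}, \\
  \mathbb{E}[r_i] - r_f &= \beta_i \,\bigl(\mathbb{E}[r_m] - r_f\bigr) \qquad \text{(CAPM)}.
\end{align}
```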
  • Derivation and optimization of turbo convolutional coding schemes for OFDM and DMT modulations.

    Julien PONS, Patrick DUVAUT
    2007
    The research work presented in this thesis focuses on the study, design and optimization of partially turbo-coded modulation schemes to improve the performance of broadband wireless and wireline digital communication systems based on OFDM and DMT technologies. More specifically, we focus on the search for coding methods that improve the trade-off between performance, complexity, flexibility and backward compatibility with Wi-Fi, WiMAX and DSL standards. Our first attempt aims at improving wireline (DSL) systems via the introduction of an original multi-level coding scheme, called hierarchical trellis coded modulation (HTCM), based on the hierarchical protection of three non-binary levels: the first level using a turbo code and the two remaining levels using trellis coded modulation (TCM). Although it can significantly improve (sometimes by more than one decibel) the coding gain of a TCM scheme of equivalent complexity, the HTCM structure is not well suited for applications using an outer Reed-Solomon (RS) code, as in DSL. As an alternative, we suggest a scheme formed by the serial concatenation of an RS code and a two-level turbo-coded modulation (TuCM) protecting the first 24-ary level with a turbo code and leaving the second level unprotected. A thorough optimization of the TuCM core shows that a structure employing the turbo code of WiMAX systems can achieve a coding gain of 7 dB for a bit error rate of 10^-7, considering a codeword formed by about 900 subcarriers. A modification of this last structure for wireless applications consists in not using an outer RS code and in protecting the second TuCM level with a convolutional code. For example, we propose a structure that combines the convolutional and turbo codes of WiMAX systems, and show that this structure improves the performance/complexity trade-off of standardized Wi-Fi and WiMAX solutions. The design and optimization of our coding schemes have led to the development of original tools, such as new theoretical bounds on the error rate of multilevel coding schemes, and a new algorithm for estimating the free distance of a turbo code. Finally, we propose a method, called self-protection, to improve the error-burst correction capability of a multi-carrier system originally designed to correct isolated errors. The technique effectively combines several classical concepts such as SNR margining, erasure decoding, and a new form of channel interleaving. This method can significantly reduce the latency of more traditional techniques (e.g., channel interleaving or RS coding).
  • Near End Crosstalk (NEXT) Canceller and Modulation Clustering for Asymmetric Digital Subscriber Line (ADSL) systems.

    Laurent PIERRUGUES, Patrick DUVAUT
    2005
    The work of this thesis aims at improving the range and throughput offered on the downstream channel of an ADSL transmission. The first part of the thesis details the ADSL transmission chain, its propagation channel and the crosstalk noise. The second part focuses on a solution to reduce the impact of NEXT noise at the subscriber side. To this end, a second, "noise-only reference" sensor is introduced at the receiver and a per-subcarrier algorithm maximizes the SNR. A statistical study confirms that this system significantly increases performance. The third part introduces the "clustering" modulation, which improves performance on long lines whatever the noise environment. This modulation groups the tones that do not have sufficient SNR to accommodate a "minimal" constellation into "clusters" carrying the same information. By this means, the unused capacity of a DMT symbol is reduced by nearly 75%.
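    A sketch of the clustering idea: per-tone bit-loading with the usual gap formula, then grouping of tones too weak for the minimal constellation into clusters that would carry the same information. The gap value, minimal constellation size and additive SNR combining are illustrative assumptions, not the thesis's exact scheme:

```python
import numpy as np

def bitload_with_clusters(snr_db, gamma_db=9.8, bmin=2):
    """Per-tone bit-loading via the gap approximation, then grouping
    of weak tones into clusters (combined by adding their gap-scaled
    SNRs, as maximum-ratio combining of a repeated symbol would)."""
    snr = 10 ** (np.asarray(snr_db) / 10)
    gap = 10 ** (gamma_db / 10)
    bits = np.floor(np.log2(1 + snr / gap)).astype(int)
    weak = np.flatnonzero(bits < bmin)
    clusters, current, acc = [], [], 0.0
    for tone in weak:
        current.append(tone)
        acc += snr[tone] / gap
        if np.log2(1 + acc) >= bmin:        # cluster now supports bmin bits
            clusters.append(current)
            current, acc = [], 0.0
    return bits, clusters                   # leftover weak tones stay unused
```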
  • Detection in non-Gaussian environment.

    Emmanuelle JAY, Patrick DUVAUT
    2002
    The radar echoes coming from the various reflections of the emitted signal on the elements of the environment (the clutter) have long been modeled by Gaussian vectors. The optimal detection procedure then reduced to the implementation of the classical matched filter. With the technological evolution of radar systems, the true nature of the clutter has been shown to be no longer Gaussian. Although the optimality of the matched filter is challenged in such cases, CFAR (constant false alarm rate) techniques have been proposed for this detector, in order to adapt the detection threshold to the multiple local variations of the clutter. In spite of their diversity, these techniques have proved neither robust nor optimal in these situations. From the modeling of the clutter by complex non-Gaussian processes, such as spherically invariant random processes (SIRP), optimal coherent detection structures have been derived. These models include many non-Gaussian distributions, such as the K-distribution or the Weibull distribution, and are recognized in the literature as relevant models of many experimental situations. In order to identify the distribution of their characteristic component, the texture, without any statistical preconception about the model, we propose in this thesis a Bayesian approach. Two new methods for estimating the texture distribution are proposed: the first is a parametric method, based on a Padé approximation of the moment generating function; the second is a Monte Carlo estimation. These estimates are performed on reference clutter data and lead to two new optimal detection strategies, respectively named PEOD (Padé Estimated Optimum Detector) and BORD (Bayesian Optimum Radar Detector). The asymptotic expression of the BORD (convergence in law), called the "asymptotic BORD", is established, together with its distribution. This last result gives access to the optimal theoretical performance of the asymptotic BORD, which also applies to the BORD when the correlation matrix of the data is non-singular. The detection performance of the BORD and the asymptotic BORD is evaluated on experimental ground clutter data. The results obtained validate both the relevance of the SIRP model for the clutter and the optimality and adaptability of the BORD to any type of environment.
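    For context, the classical Gaussian-optimal coherent detector the thesis starts from: the matched-filter statistic, compared to a threshold. BORD generalizes this to SIRP clutter; its own statistic is not reproduced here:

```python
import numpy as np

def matched_filter_stat(y, p, R):
    """Classical coherent matched-filter statistic
    |p^H R^{-1} y|^2 / (p^H R^{-1} p) for steering vector p,
    observation y and noise covariance R (all complex)."""
    Rinv = np.linalg.inv(R)
    num = np.abs(np.conj(p) @ Rinv @ y) ** 2
    den = np.real(np.conj(p) @ Rinv @ p)
    return num / den   # declare a target if this exceeds the threshold
```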
  • Contribution to the improvement of acoustic echo cancellation systems.

    Frederic BERTHAULT, Patrick DUVAUT
    2000
    The need for acoustic echo cancellation arises when a sound recording picks up a disturbing echo from a loudspeaker broadcasting a sound signal in a room or a vehicle interior. Many books cover the different algorithmic aspects of this discipline; the various algorithmic options have been widely studied, and the evolution of processors now allows the realization of efficient echo cancellers. Since the beginning of the 1990s, new applications and the desire to build stereophonic systems that restore natural communication in videoconferencing (reproducing the speakers' positions) have oriented research towards the extension of monophonic processing to the stereophonic case. The new difficulties encountered are essentially due to the sound-recording model of this type of application, which generates highly correlated signals whose consequences have been widely documented. During this thesis, two research axes were followed. The first was to contribute to the improvement of stereophonic echo cancellation methods applied to videoconferencing systems, with a particular interest in the extension of frequency-domain adaptive filtering algorithms to the stereophonic case. This study opened the way to a new modified frequency-domain algorithm that deals with the correlation of the input channels and whose application to videoconferencing systems seems promising. The second was to study the implementation of a stereophonic echo cancellation system to deal with the echo of a car radio disturbing a voice recognition system on board a vehicle. This application has only rarely been studied, and the results of this work notably led to a demonstrator, built for a car manufacturer, allowing voice control of the functions of a car radio.
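    A minimal time-domain NLMS echo canceller, the monophonic baseline whose frequency-domain and stereophonic extensions the thesis studies (tap count and step size are illustrative):

```python
import numpy as np

def nlms_echo_canceller(x, d, n_taps=64, mu=0.5, eps=1e-6):
    """NLMS adaptive echo canceller.
    x: loudspeaker (far-end) signal, d: microphone signal with echo.
    Returns the echo-cancelled signal and the final filter."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        xn = x[n - n_taps:n][::-1]        # most recent samples first
        e[n] = d[n] - w @ xn              # echo-cancelled output
        w += mu * e[n] * xn / (xn @ xn + eps)   # normalized update
    return e, w
```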
  • Estimation of GPS signal delays in the presence of multipath.

    Jerome SOUBIELLE, Patrick DUVAUT
    1999
    GPS is a satellite positioning system developed by the United States in the 1960s. Originally designed for military purposes, it is now present in many civil applications (aeronautical and maritime radionavigation, cell phone positioning). Even though the system is very successful because of its diverse capabilities, it remains subject to errors (positioning inaccuracies) due to the presence of secondary paths in certain configurations where the direct signal can reflect off the earth's surface, buildings, etc. The objective of the thesis is to remedy the positioning errors generated by these multipaths. The originality of the work is to present, in a first step, the usual positioning techniques in the multipath-free context. Published studies on the delay estimation of GPS signals usually present the resolution methods (algorithms, architectures, etc.) without justifying them. This thesis shows that the choices made are based on a maximum likelihood study and that the selected estimator is, in fact, optimal in the multipath-free case. Moreover, we show the limitations (measurement inaccuracies) of this estimator in the multipath case. In a second step, we analyze the physical properties of the reflected signals using the experimental recordings of our industrial collaborators (Thomson-CSF Detexis), in order to model the reflections as well as possible and to know the laws governing such physical phenomena. Finally, building on the previous studies and especially on the characterization of multipaths, the objective is to design an estimation method robust to both the absence and the presence of these secondary paths. The choice was made to rely on a maximum likelihood study (dedicated to the multipath context and therefore based on a multipath model) to obtain the optimal estimator in the studied context. It has been shown that this preserves a certain continuity of the processing usually used in the absence of multipath, and thus provides a simple, easy-to-implement (patented) architecture for the whole positioning chain in the presence of multipath.
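    The multipath-free maximum-likelihood estimator discussed above reduces, in white noise, to picking the peak of the correlation with the local code replica. A minimal sketch (sample-level resolution only; real receivers interpolate and track):

```python
import numpy as np

def ml_delay(received, code):
    """Maximum-likelihood delay estimate in white noise without
    multipath: the lag maximizing the correlation between the
    received signal and the local code replica (in samples)."""
    corr = np.correlate(received, code, mode="valid")
    return np.argmax(np.abs(corr))
```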
  • MCMC methods for Bayesian analysis of nonlinear parametric regression models. Application to line analysis and impulse deconvolution.

    Christophe ANDRIEU, Patrick DUVAUT
    1998
    In this thesis, the general regression problem, which involves non-linear parameters, is addressed in a Bayesian framework. This approach allows formalizing the difficult problems of non-linear parameter estimation as well as the choice of a model allowing a parsimonious representation of the observed signal. The effective implementation of Bayesian statistics requires numerical procedures. The procedures used in this work are Markov chain Monte Carlo (MCMC) methods, which allow integration and optimization to be carried out efficiently over a union of spaces of different dimensions. The procedures developed are applied to the problem of spectral analysis of sinusoids embedded in white Gaussian noise. A Monte Carlo study of the performance of different model selection procedures, derived from the developed algorithms and from classical criteria, is presented. We then show how it is possible to extend the previously proposed procedures to cases where the observation noise may be non-Gaussian or colored. We also show how these algorithms can be applied when the data undergo a thresholding phenomenon, preventing observation of the noisy process beyond certain thresholds. The problem of deconvolution of continuous-time filtered and noisy point processes is also addressed in a Bayesian framework and solved by means of MCMC methods. The statistical model and the associated algorithms are applied to real spectrometry data.
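    A toy instance of the MCMC machinery described above: a random-walk Metropolis sampler for the frequency of a single sinusoid in white Gaussian noise, with the linear amplitudes profiled out by least squares at each proposal and flat priors assumed. The reversible-jump moves over model dimension are not reproduced:

```python
import numpy as np

def mh_frequency(y, n_samples=5000, step=0.005, sigma2=0.25):
    """Random-walk Metropolis sampling of one sinusoid's frequency;
    flat prior on (0, 0.5), periodogram-peak initialization to keep
    this toy chain in the dominant mode."""
    t = np.arange(len(y))

    def log_lik(f):
        X = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ coef
        return -0.5 * (r @ r) / sigma2

    spec = np.abs(np.fft.rfft(y)) ** 2
    f = (np.argmax(spec[1:]) + 1) / len(y)      # periodogram initialization
    ll = log_lik(f)
    rng = np.random.default_rng(2)
    chain = np.empty(n_samples)
    for i in range(n_samples):
        f_new = f + step * rng.standard_normal()
        if 0.0 < f_new < 0.5:
            ll_new = log_lik(f_new)
            if np.log(rng.random()) < ll_new - ll:  # Metropolis acceptance
                f, ll = f_new, ll_new
        chain[i] = f
    return chain

rng = np.random.default_rng(0)
t = np.arange(200)
y = np.cos(2 * np.pi * 0.123 * t) + 0.5 * rng.standard_normal(200)
print(np.median(mh_frequency(y)))               # close to 0.123
```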
  • Contribution of signal processing to the detection of aerodynamic instabilities of the axial compressor of a turbojet engine.

    Romuald PREVITALI, Patrick DUVAUT
    1998
    Discovering a means of systematically controlling the aerodynamic instabilities of turbojet compressors is one of the current challenges for aircraft engine manufacturers. It would give access to more efficient engines with minimal risk. The objective of this study is to bring a new perspective, that of the signal processing specialist, to this fluid dynamics problem. To do this, the problem is cast as a non-destructive testing problem and formulated in terms of source localization. These formulations lead to efficient single-sensor and multi-sensor detection methods.
  • Contribution of Hermite polynomials to non-Gaussian modeling and associated statistical tests.

    David DECLERCQ, Patrick DUVAUT
    1998
    The objective of this thesis is to study the contributions of Hermite polynomials, when their arguments are Gaussian random variables, to some fields of signal processing and statistics. A family of statistical tests of Gaussianity, called Hermite tests, is introduced. It uses the orthogonality of Hermite polynomials with respect to the Gaussian weight, through a sphericity statistic. We conducted an asymptotic study of the Hermite test in the case of standard data, and a non-asymptotic study (with power comparison) in an invariant setting. The powers exhibited show that, in addition to the advantage brought by the intrinsic modularity of Hermite tests, they perform well compared to the usual tests. A class of non-linear/non-Gaussian processes, called H-ARMA, is studied. These consist of an ARMA-like linear filtering of a Gaussian input, followed by an instantaneous Hermite polynomial transformation. The use of Hermite polynomials, and in particular the Mehler and Kibble-Slepian formulas, allows writing the temporal and spectral cumulants of these processes, as well as the non-asymptotic computation of their empirical estimation variance. The identification of these models was first conducted in a supervised context, then in a blind context. Blind identification is confronted with the non-invertibility of these processes as soon as the polynomial non-linearity is no longer one-to-one. After highlighting the limitations of traditional estimation methods (maximum likelihood, cumulant methods, etc.), we employed stochastic MCMC algorithms, taking advantage of the augmentation of the model by hidden state variables. Implemented in the Bayesian paradigm, these methods provide a first solution to the identification of non-linear/non-invertible models.
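    A minimal sketch of the thesis's central objects: probabilists' Hermite polynomials generated by their three-term recurrence, with a Monte Carlo check of the Gaussian-weight orthogonality the Hermite tests exploit:

```python
import numpy as np

def hermite_probabilists(x, order):
    """Probabilists' Hermite polynomial He_n(x) via the recurrence
    He_0 = 1, He_1 = x, He_{n+1} = x He_n - n He_{n-1}."""
    h_prev, h = np.ones_like(x), x.copy()
    if order == 0:
        return h_prev
    for n in range(1, order):
        h_prev, h = h, x * h - n * h_prev
    return h

# Orthogonality check: E[He_m(X) He_n(X)] = n! * delta_mn for X ~ N(0,1).
rng = np.random.default_rng(3)
x = rng.standard_normal(200_000)
print(np.mean(hermite_probabilists(x, 2) * hermite_probabilists(x, 3)))  # ~0
print(np.mean(hermite_probabilists(x, 3) ** 2))                          # ~6
```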
  • Bayesian methods for image restoration and reconstruction: application to gammagraphy and photofission tomography.

    Guillaume STAWINSKI, Patrick DUVAUT
    1998
    This thesis is devoted to the development of Bayesian algorithms for the solution of inverse problems encountered in gamma radiography and photofission tomography. For each of these applications, the different statistical transformations undergone by the signal, due to the measurement system, have been studied. Two possible models of the measurement system have been determined for each application: a relatively simple classical model and a new model based on cascade point processes. Bayesian EM (expectation-maximization) and MCMC (Markov chain Monte Carlo) algorithms for image restoration and reconstruction, based on these models, have been developed and compared. It appeared experimentally that modeling by cascade point processes does not significantly improve the results obtained from classical modeling. In the context of gamma radiography, we then proposed two original approaches allowing an improvement of the results. The first consists in introducing an inhomogeneous Markov field as a prior, i.e. spatially varying the regularization parameter associated with a classical Gaussian Markov field. However, the estimation of the hyperparameters necessary for this approach remains a major problem. In the context of point source deconvolution, a second approach consists in introducing high-level models. The image is modeled by a list of objects with known shapes but unknown number and parameters. The estimation is then performed using reversible-jump MCMC methods. This approach yields more accurate results than those obtained by a Markov field modeling, for reduced computation time.
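    For context, the classical EM algorithm for Poisson-noise restoration is the Richardson-Lucy iteration, shown here in 1-D; the thesis's cascade point-process models and MCMC variants are not reproduced, and the PSF is assumed normalized:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=50):
    """Richardson-Lucy (EM) deconvolution for Poisson-distributed
    counts y blurred by psf (1-D; psf assumed to sum to 1)."""
    x = np.full_like(y, y.mean(), dtype=float)   # flat initialization
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred, 1e-12)   # data / model prediction
        x *= np.convolve(ratio, psf_flip, mode="same")
    return x
```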
  • Study of the applications of the flash process to passive listening systems and optimization by Bernoulli-Gaussian deconvolution.

    Marc ROLLET, Patrick DUVAUT
    1997
    The subject of this thesis was the study of the applications of a new process to passive listening systems and its optimization by a standard deconvolution technique. This work has two phases. The first dealt with the application of a new concept, called flash, to broadband passive listening of radar signals in the microwave domain. In this first phase, after a definition of the context of use, a direction-finding technique based on electronic scanning was studied. After analyzing the analogy with the flash concept, properties were highlighted and several scenarios were considered. In a second step, the same approach was carried out for the study of a broadband frequency meter. The different scenarios were then numerically simulated and partial conclusions were drawn. The second phase then consisted in studying the use of a multi-pulse deconvolution technique to improve the results and the robustness of the devices under degraded SNR conditions. The approach, incremental in its integration of constraints, led, mutatis mutandis, to the adaptation and implementation of a Bayesian Bernoulli-Gaussian deconvolution algorithm. The results obtained revealed a strong gain in performance.
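    A greedy sketch of Bernoulli-Gaussian deconvolution: repeatedly place the spike that most reduces a penalized least-squares criterion. This is a matching-pursuit stand-in for the Bayesian (SMLR-type) detectors actually used in this literature; lam plays the role of the Bernoulli prior's penalty, and h is assumed odd-length and centered:

```python
import numpy as np

def bg_deconvolve(y, h, lam=2.0, max_spikes=20):
    """Greedy sparse-spike deconvolution of y = h * x + noise.
    Accept a spike only if its squared-error gain exceeds lam."""
    x = np.zeros(len(y))
    r = y.copy()
    hh = h @ h
    for _ in range(max_spikes):
        corr = np.correlate(r, h, mode="same")
        k = np.argmax(np.abs(corr))
        a = corr[k] / hh                  # LS amplitude at position k
        if a * corr[k] < lam:             # gain corr[k]^2 / hh below penalty
            break
        x[k] += a
        r = y - np.convolve(x, h, mode="same")   # refresh the residual
    return x
```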
  • Joint detection-estimation from time-scale and time-frequency planes.

    Herve ROUSSEAU, Patrick DUVAUT
    1997
    The work summarized in this thesis developed two joint detection-estimation methods applying a Bernoulli-Gaussian deconvolution (BGD) algorithm, after a study of the specificities of two representations: multiresolution analysis and the short-time Fourier transform (STFT). The first method performs BGD in a non-Gaussian environment. The use of multiresolution analysis is justified by the Gaussianizing effect induced by the linear filtering of the projections on each scale via the a trous algorithm. A BGD algorithm is then applied on these scales, and the results are merged thanks to the redundancy of information from one scale to another. We obtain a new version of the BGD algorithm as an alternative to the original one when the additive noise is non-Gaussian, such as Poissonian noise. The second method performs instantaneous frequency (IF) estimation. It is based on a conjecture that gives the STFT a convolutional structure between a kernel (depending on the time-frequency (TF) atom used in the STFT computation) and TF attributes such as instantaneous frequencies and group delays, the latter two being assumed to be composite point-continuous processes (chirps). This structure allows two dual convolutional models on each of the temporal and frequency marginals. After an initial step of identification of the TF kernel, we propose a three-step procedure. Step 1 uses a goniometry function to detect the main TF angles of the signal; for each of these angles, we then have a TF atom adapted to the associated linear chirp and the convolutional model to consider. Step 2 consists in performing BGD on each STFT. Step 3 obtains the IF by merging the different results. This Bayesian formulation has allowed us to obtain a powerful method for estimating these frequencies, bringing a gain of about 10 dB compared to classical methods.
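    The per-scale filtering invoked by the first method is the undecimated ("a trous") multiresolution analysis. A minimal sketch with the usual B3-spline kernel (kernel choice and scale count are illustrative; the signal is assumed longer than the most dilated kernel):

```python
import numpy as np

def a_trous(signal, n_scales=4):
    """Undecimated 'a trous' wavelet decomposition: smooth with an
    increasingly dilated kernel; detail planes are the successive
    differences. Returns (details, final smooth)."""
    h = np.array([1, 4, 6, 4, 1]) / 16.0     # B3-spline smoothing kernel
    c = signal.astype(float)
    details = []
    for j in range(n_scales):
        # Dilate the kernel by inserting 2^j - 1 zeros between taps.
        hj = np.zeros((len(h) - 1) * 2 ** j + 1)
        hj[:: 2 ** j] = h
        c_next = np.convolve(c, hj, mode="same")
        details.append(c - c_next)
        c = c_next
    return details, c
```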
  • Monte Carlo algorithms for Bayesian estimation of hidden Markov models. Application to radiation signal processing.

    Arnaud DOUCET, Patrick DUVAUT
    1997
    Hidden Markov models (HMMs) are used to model a large number of signals in various domains, including nuclear measurement. Except for a few simple cases, Bayesian estimation problems for HMMs do not admit analytical solutions. This thesis is devoted to the algorithmic solution of a part of these problems by Monte Carlo methods, and to the application of these algorithms to radiation signal processing. After giving some background on radiation signals and the associated HMMs, the Bayesian estimation problems are formulated. We then propose a synthesis of Monte Carlo algorithms for online Bayesian estimation of non-linear and non-Gaussian HMMs and propose several original extensions of existing methods. In the following chapters, offline estimation methods based on Markov chain Monte Carlo (MCMC) methods are presented. First, data augmentation and two original simulated annealing algorithms based on data augmentation are proposed and studied for the estimation of the states of linear models with jumps. A simulated annealing algorithm based on data augmentation is then proposed and studied for maximum a posteriori parameter estimation in finite-state HMMs. We then propose MCMC algorithms for the Bayesian estimation of ARMA models driven by a finite and/or continuous mixture of Gaussians with unknown parameters. The first algorithm is a Gibbs sampler. As this algorithm suffers from several shortcomings, a second, more efficient algorithm based on the concept of partial conditioning is proposed. It is applied to the estimation of ARMA models with impulsive excitation as well as to the blind deconvolution of Bernoulli-Gaussian processes. Finally, we propose an MCMC algorithm for the Bayesian estimation of models with non-Gaussian observations. Two original procedures for simulating the hidden state process are proposed. This algorithm is applied to the estimation of the intensity of a doubly stochastic Poisson process from count data.
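    A generic bootstrap particle filter, the basic online Monte Carlo estimator for nonlinear/non-Gaussian HMMs of the kind the thesis surveys and extends (Gaussian noises and all names below are illustrative assumptions):

```python
import numpy as np

def bootstrap_pf(y, f, h, q, r, n_part=500, seed=4):
    """Bootstrap particle filter for x_t = f(x_{t-1}) + q w_t,
    y_t = h(x_t) + r v_t with standard Gaussian w, v:
    propagate, weight by the likelihood, resample (multinomial)."""
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_part)
    means = np.empty(len(y))
    for i, yt in enumerate(y):
        particles = f(particles) + q * rng.standard_normal(n_part)
        w = np.exp(-0.5 * ((yt - h(particles)) / r) ** 2)
        w /= w.sum()
        means[i] = w @ particles                 # posterior-mean estimate
        particles = particles[rng.choice(n_part, size=n_part, p=w)]
    return means

# Toy nonlinear model: saturated observation of an AR(1) state.
rng = np.random.default_rng(0)
x, obs = 0.0, []
for _ in range(100):
    x = 0.9 * x + 0.5 * rng.standard_normal()
    obs.append(np.tanh(x) + 0.1 * rng.standard_normal())
print(bootstrap_pf(np.array(obs), f=lambda s: 0.9 * s,
                   h=np.tanh, q=0.5, r=0.1)[:5])
```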
  • Contribution of high-order correlations to the analysis of non-Gaussian textures.

    Christophe COROYER, Patrick DUVAUT
    1996
    No summary available.
  • Contribution of the Bayesian method to pure line spectral analysis and high resolution goniometry.

    Frederic DUBLANCHET, Patrick DUVAUT
    1996
    This work is entirely devoted to the solution of the classical problems of pure line search in time series spectral analysis and of goniometric analysis in array processing. These are formulated as inverse problems and approached in the context of Bayesian estimation. In a high resolution analysis, the estimation of the number of sources or spectral lines (detection) plays a determining role insofar as it conditions the overall quality of the results. The traditional methodological approach, which consists in separating the estimation of the number of sources from their localization, suffers from a number of limitations. These are first listed and explained. In order to overcome them, we propose to treat these two tasks jointly. To this end, we exploit additional information related to the structure of the solution, namely its impulsive character. In the context of Bayesian statistical estimation, such a structure is satisfactorily described by composite random processes. Two types of probabilistic models are considered, inspired by the field of impulse deconvolution: the Bernoulli-Gaussian model and its Poisson-Gaussian extension. On the algorithmic level, the regularized solution is obtained by optimizing a mixed criterion, composed of a data-fidelity term and a term expressing the prior introduced on the solution: a single criterion is formed, which includes the variable dimension of the solution. We show the interest of combinatorial exploration techniques for optimizing the likelihood criteria used. Finally, processing examples highlight the substantial improvement obtained by this new approach.
  • Contribution of oblique multiresolution analysis and generalized likelihood ratio to partially cooperative recognition.

    Zyed TIRA, Patrick DUVAUT
    1996
    The monitoring of rotating machines in EDF nuclear power plants requires identifying the shape of transients in vibration signals. The assignment of a shape is done relative to a reference set and is independent of scale, amplitude, baseline and arrival-time parameters. The purpose of this thesis is to automate this pattern recognition procedure. Two approaches are proposed to solve this problem. The first is a direct approach based on the generalized likelihood ratio. Its advantage is that it allows not only recognizing the shape of the transient, but also estimating its unknown parameters. Its major drawback is that it requires a simple mathematical model of the shape. The second approach is based on a hierarchical decision tree. It uses two multiscale detectors as well as algorithms for parameter jump detection and line detection in a spectrum. These algorithms take into account the fact that the processed vibration signals are sampled with a random step. The first multiscale detector is based on extremum coding of the wavelet decomposition. It uses a redundant oblique multiresolution analysis, which extends the one introduced by S. Mallat and is defined in this thesis. The second detector uses the generalized maximum likelihood technique to decompose the observed signal on four reference wavelets. The performance of these detectors is evaluated using ROC curves. The hierarchical approach for pattern recognition is validated on a set of synthetic signals and evaluated on a set of real signals.
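    Detector performance above is summarized by ROC curves. A minimal empirical ROC computation from detector statistics simulated under the noise-only and transient-present hypotheses:

```python
import numpy as np

def roc_curve(stat_h0, stat_h1, n_points=100):
    """Empirical ROC: false-alarm and detection probabilities as a
    detection threshold sweeps the range of the statistics
    (stat_h0: values under H0, stat_h1: values under H1)."""
    thresholds = np.linspace(min(stat_h0.min(), stat_h1.min()),
                             max(stat_h0.max(), stat_h1.max()), n_points)
    pfa = np.array([(stat_h0 > t).mean() for t in thresholds])
    pd = np.array([(stat_h1 > t).mean() for t in thresholds])
    return pfa, pd
```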
  • Contribution to the analysis of non-stationary and/or non-Gaussian signals.

    Claude JORAND, Patrick DUVAUT
    1994
    The Wigner-Ville transform (WVT) is a well-known tool for the analysis of non-stationary signals. Its non-linear structure is a source of interference between the components of the analyzed signal, which poses a problem when extracting the relevant information from the resulting image. Considered as a two-dimensional random process, this image is generally non-Gaussian. Thus, this transformation combines three "non-properties": non-linearity, due to the operator, and non-stationarity and non-Gaussianity, which affect the analyzed or produced signal. In this study, we are interested in the two non-properties related to the signals: non-Gaussianity, for which some techniques using statistics of order greater than two are discussed, and non-stationarity, in which the wavelet transform plays a supporting role. Finally, in an extended WVT framework, we address these two attributes simultaneously. Another fundamental aspect of this work is a contribution to the transfer of signal processing techniques to industry, through the development of non-stationary analysis software and its application to the vibration analysis of an automotive engine.
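    A minimal discrete (pseudo) Wigner-Ville distribution for an analytic signal, the tool the thesis starts from; frequency-axis scaling conventions vary and are glossed over here:

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution: for each time n,
    Fourier-transform the local autocorrelation x[n+k] x*[n-k]
    over the lags k that stay inside the signal."""
    n = len(x)
    kernel = np.zeros((n, n), dtype=complex)
    for t in range(n):
        kmax = min(t, n - 1 - t)
        for k in range(-kmax, kmax + 1):
            kernel[t, k % n] = x[t + k] * np.conj(x[t - k])
    # Hermitian symmetry in k makes the transform real.
    return np.real(np.fft.fft(kernel, axis=1))
```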
  • Contribution to the choice of sub-array antenna architectures.

    Lionel HAYOUN, Patrick DUVAUT
    1993
    In order to reduce the hardware complexity and financial cost of a digital beamforming antenna array, it is often necessary to group the elementary sensors into sub-arrays. In this way, the fundamental characteristics of the initial array are preserved: transmitted and received power, gain, beamwidth. There is potentially a large number of ways to form these groupings, and it is legitimate to look for optimal configurations with respect to one or more given criteria. We propose and analyze in detail a number of optimality criteria. Some of them are related to the detection capability of the antenna, while others characterize estimation performance. We have, moreover, demonstrated correlations between these different criteria, showing in particular that some of them are antagonistic and that there is therefore no universal law that optimizes all criteria at once. The problem of choosing the grouping law can present itself to the designer in two distinct forms: either determining the best topology within a family of predefined laws, or finding the best law without initial arbitrariness. For this second approach, we propose to use the simulated annealing algorithm, which has proven to be well suited to the cost functions involved. Finally, the numerous simulations carried out have allowed us to acquire a certain expertise, expressed in the form of general rules for the design of sub-array antenna architectures.
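    A generic simulated-annealing search over sensor-to-subarray assignments, the optimization tool proposed above for finding a grouping law without an initial predefined family; the cost callable, cooling schedule and move kernel are illustrative:

```python
import numpy as np

def anneal_grouping(cost, n_sensors, n_sub, n_iter=20000, t0=1.0, seed=5):
    """Simulated annealing over assignment maps sensor -> subarray;
    'cost' is whatever detection/estimation criterion is optimized."""
    rng = np.random.default_rng(seed)
    grouping = rng.integers(0, n_sub, n_sensors)
    c = cost(grouping)
    best, c_best = grouping.copy(), c
    for i in range(n_iter):
        temp = t0 * (1.0 - i / n_iter) + 1e-9   # linear cooling schedule
        cand = grouping.copy()
        cand[rng.integers(n_sensors)] = rng.integers(n_sub)  # single move
        dc = cost(cand) - c
        if dc < 0 or rng.random() < np.exp(-dc / temp):      # Metropolis rule
            grouping, c = cand, c + dc
            if c < c_best:
                best, c_best = grouping.copy(), c
    return best, c_best

# Illustrative cost: balance subarray sizes (a stand-in for the
# thesis's detection/estimation criteria).
cost = lambda g: np.var(np.bincount(g, minlength=4))
best, c = anneal_grouping(cost, n_sensors=32, n_sub=4)
print(np.bincount(best, minlength=4), c)
```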
  • New methods of leak detection and location by acoustic emission.

    Pascal BOULANGER, Patrick DUVAUT
    1993
    Real-time monitoring of the piping systems of pressurized water nuclear power plants is moving towards the integration of digital processing systems. In this respect, the acoustic emission method shows promising performance. Its principle is based on passive listening to the noises emitted by internal micro-displacements of a material under stress, which propagate as elastic waves. The small amount of a priori information available about leakage signals led us to deepen our understanding of the physical phenomena underlying the generation of flow-induced noise. We gather all these results in the form of a leakage model linked to the geometry and flow type of the crack. The detection and localization problems are formulated according to the maximum likelihood principle. For detection, methods based on similarity information (correlation, tricorrelation) seem to give better results than classical methods (RMS, envelope, filter bank). For localization, we propose a range of classical (generalized cross-correlation) and innovative (convolution, adaptive, higher-order) methods. A last part is devoted to the study of higher-order statistics. The analysis of estimators of higher-order quantities for a family of non-linear non-Gaussian random processes, the improvement of the performance of non-linear prediction, and the choice of an optimal order are discussed in simple analytical cases. Finally, some applications to leakage signals are presented.
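    One of the classical localization tools cited above is the generalized cross-correlation. A minimal PHAT-weighted variant estimating the inter-sensor delay (sample resolution; the thesis's adaptive and higher-order variants are not reproduced):

```python
import numpy as np

def gcc_phat_delay(s1, s2):
    """Time-delay estimate between two sensors by PHAT-weighted
    generalized cross-correlation (delay in samples, positive
    when s2 lags s1)."""
    n = len(s1) + len(s2)
    S1 = np.fft.rfft(s1, n)
    S2 = np.fft.rfft(s2, n)
    cross = np.conj(S1) * S2
    # PHAT weighting: keep only the phase of the cross-spectrum.
    cc = np.fft.irfft(cross / np.maximum(np.abs(cross), 1e-12), n)
    cc = np.concatenate((cc[-(n // 2):], cc[: n // 2 + 1]))  # center lag 0
    return np.argmax(np.abs(cc)) - n // 2
```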
Affiliations are detected from the signatures of publications identified in scanR. An author can therefore appear to be affiliated with several structures or supervisors according to these signatures. The dates displayed correspond only to the dates of the publications found. For more information, see https://scanr.enseignementsup-recherche.gouv.fr