Statistical learning methods for global optimization.

Authors
  • CONTAL Emile
  • VAYATIS Nicolas
  • MASSART Pascal
  • GARNIER Josselin
  • KRAUSE Andreas
  • PERCHET Vianney
  • GARIVIER Aurelien
Publication date
2016
Publication type
Thesis
Summary This thesis is devoted to a rigorous analysis of sequential global optimization algorithms. We place ourselves in a stochastic bandit model in which an agent aims at determining the input of a system that optimizes a criterion. This target function is not known, and the agent sequentially performs queries to evaluate its value at the inputs it chooses. The function may be non-convex and contain a large number of local optima. We address the difficult case where evaluations are costly, which requires a rigorous design of the query selection. We consider two objectives: on the one hand, the optimization of the sum of the values received at each iteration; on the other hand, the optimization of the best value found so far. This thesis follows the Bayesian optimization framework, in which the function is a realization of a known stochastic process, and also introduces a new approach to optimization based on ranking, where only comparisons of the function values are performed. We propose new algorithms and provide theoretical concepts to obtain performance guarantees. We give an optimization strategy that adapts to observations received in batches rather than individually. A generic study of the local suprema of stochastic processes allows us to analyze Bayesian optimization on nonparametric search spaces. We also show that our approach extends to natural non-Gaussian processes. We establish links between active learning and statistical learning to rank, and derive an algorithm for the optimization of potentially discontinuous functions.
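
To illustrate the sequential setting described in the summary, the sketch below shows a generic GP-UCB-style optimization loop: a Gaussian process models the unknown function, and the next query maximizes an optimistic upper confidence bound. This is only a minimal illustration of the general framework, not the specific algorithms of the thesis; the objective f, the discretized search space, and the exploration schedule beta are illustrative assumptions.

```python
# Minimal sketch of a GP-UCB-style sequential optimization loop.
# The objective f, the candidate grid, and the beta schedule are illustrative,
# not the thesis's specific algorithms or constants.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):
    # Placeholder for the unknown, possibly non-convex, costly-to-evaluate system.
    return np.sin(3.0 * x) + 0.5 * np.cos(7.0 * x)

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)  # discretized search space
X = candidates[rng.choice(len(candidates), size=2, replace=False)]
y = np.array([f(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4)

for t in range(20):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    beta = 2.0 * np.log(len(candidates) * (t + 1) ** 2)  # illustrative exploration schedule
    ucb = mu + np.sqrt(beta) * sigma                      # optimism in the face of uncertainty
    x_next = candidates[np.argmax(ucb)]                   # next query
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next[0]))

print("best value found:", y.max())         # relates to the best-value (simple-regret) objective
print("average value received:", y.mean())  # relates to the sum-of-values (cumulative) objective
```

The two final prints mirror the two objectives mentioned in the summary: the best value found so far, and the sum (here, average) of the values received over the iterations.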