Introduction to optimal vector quantization and its applications for numerics.
Summary
We present an introductory survey of optimal vector quantization and its first applications to Numerical Probability and, to a lesser extent, to Information Theory and Data Mining. We cover both theoretical results on the quantization rate of a random vector taking values in R^d (equipped with the canonical Euclidean norm) and the learning procedures that make it possible to design optimal quantizers (the CLVQ and Lloyd's I procedures). We also introduce and investigate the more recent notion of "greedy quantization", which may be seen as a sequential form of optimal quantization, and establish a rate-optimality result for it. A brief comparison with the Quasi-Monte Carlo method is also carried out.
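For concreteness, below is a minimal Monte Carlo sketch of the procedures named in the summary. The paper works with the distribution of the random vector itself; this illustration replaces expectations by averages over an i.i.d. sample, and the function names (lloyd_I, clvq, greedy_quantizer) are illustrative, not taken from the paper.

```python
import numpy as np

def lloyd_I(samples, N, n_iter=50, seed=0):
    """Sample-based variant of Lloyd's I fixed-point iteration:
    alternately assign each sample to its nearest codebook point
    (Voronoi cells), then move each point to the centroid of its cell."""
    rng = np.random.default_rng(seed)
    grid = samples[rng.choice(len(samples), N, replace=False)].copy()
    for _ in range(n_iter):
        d2 = ((samples[:, None, :] - grid[None, :, :]) ** 2).sum(-1)
        cells = d2.argmin(axis=1)            # nearest-neighbour assignment
        for i in range(N):
            in_cell = cells == i
            if in_cell.any():                # leave empty cells unchanged
                grid[i] = samples[in_cell].mean(axis=0)
    return grid

def clvq(samples, N, gamma0=0.1, seed=0):
    """Competitive Learning Vector Quantization: a stochastic gradient
    descent on the quadratic distortion in which, at each step, only the
    'winning' (nearest) codebook point moves toward the incoming sample."""
    rng = np.random.default_rng(seed)
    grid = samples[rng.choice(len(samples), N, replace=False)].copy()
    for t, x in enumerate(samples, start=1):
        gamma = gamma0 / (1.0 + gamma0 * t)          # decreasing step (one common choice)
        winner = ((grid - x) ** 2).sum(-1).argmin()  # competition phase
        grid[winner] -= gamma * (grid[winner] - x)   # learning phase
    return grid

def greedy_quantizer(samples, N):
    """Greedy (sequential) quantization: points are added one at a time,
    each new point minimizing the distortion of the enlarged grid while
    the earlier points stay frozen; candidates are the sample points.
    Note the pairwise distance matrix is O(M^2) in the sample size M."""
    d2 = ((samples[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    chosen = [int(d2.mean(axis=0).argmin())]   # best one-point quantizer
    best = d2[:, chosen[0]].copy()             # squared distance to current grid
    for _ in range(N - 1):
        scores = np.minimum(best[:, None], d2).mean(axis=0)
        scores[chosen] = np.inf                # do not reselect a point
        j = int(scores.argmin())
        chosen.append(j)
        best = np.minimum(best, d2[:, j])
    return samples[chosen]

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(2_000, 2))  # N(0, I_2) sample
    print(lloyd_I(X, N=20).shape, clvq(X, N=20).shape, greedy_quantizer(X, N=20).shape)
```

On such a Gaussian sample one can check empirically that the quadratic distortion of the resulting N-quantizers decays roughly at the N^(-2/d) rate predicted by Zador's theorem.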
Topics of the publication
- Optimal vector quantization
- Greedy quantization
- Quantization tree
- Lloyd's I algorithm
- Competitive Learning Vector Quantization
- Stochastic gradient descent
- Learning algorithms
- Zador's Theorem
- Feynman-Kac formula
- Variational inequality
- Optimal stopping
- Quasi-Monte Carlo method
- Nearest neighbor search
- Partial distance search