A theoretically founded over-penalization of the AIC criterion.

Authors
Publication date
2017
Publication type
Proceedings Article
Summary
The fact that a slight over-penalization stabilizes model selection procedures is well known to specialists. Indeed, it has been observed since the late 1970s that adding a small positive quantity to classical penalized criteria such as AIC often improves prediction results, especially for small or moderate sample sizes. The main reason is that over-penalization tends to guard against overfitting (over-learning). We propose the first general and theoretically sound over-penalization strategy and apply it to the AIC criterion. Simulations show very good results.
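As a hedged illustration of the general idea (not the criterion proposed in the paper), the sketch below compares classical AIC with the Hurvich–Tsai small-sample correction AICc, a familiar example of an over-penalized criterion: it adds a positive quantity to AIC that shrinks as the sample size grows. The polynomial-regression setup, sample size, and all names are assumptions made for the example.

    import numpy as np

    def gaussian_aic(y, y_hat, n_params):
        # Classical AIC for Gaussian regression; the noise variance counts as a parameter.
        n = len(y)
        sigma2 = np.sum((y - y_hat) ** 2) / n          # MLE of the noise variance
        log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        k = n_params + 1                               # +1 for the estimated variance
        return 2 * k - 2 * log_lik

    def gaussian_aicc(y, y_hat, n_params):
        # Hurvich-Tsai correction: AIC plus a small positive term (an over-penalization).
        n = len(y)
        k = n_params + 1
        return gaussian_aic(y, y_hat, n_params) + 2 * k * (k + 1) / (n - k - 1)

    rng = np.random.default_rng(0)
    n = 30                                             # small sample, where over-penalization matters most
    x = np.linspace(-1, 1, n)
    y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)  # true model: degree 1

    for degree in range(1, 8):
        coefs = np.polyfit(x, y, degree)
        y_hat = np.polyval(coefs, x)
        p = degree + 1                                 # number of regression coefficients
        print(f"degree {degree}: AIC={gaussian_aic(y, y_hat, p):.2f}  AICc={gaussian_aicc(y, y_hat, p):.2f}")

On such small samples, the extra penalty typically steers the selection away from high-degree (overfitted) polynomials more reliably than plain AIC.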
Topics of the publication
    No themes identified