Erroneous models in neural networks and their threats for formal verification

Authors
  • VIOT Augustin
  • LUSSIER Benjamin
  • SCHON Walter
  • GERONIMI Stephane
  • TACCHELLA Armando
Publication date
2020
Publication type
Proceedings Article
Summary
This article explains why current dependability techniques are not suitable for neural networks (NNs). It also shows through an experiment that we need justifiable trust in neural network modeling before formal verification can be used for critical applications.