Extracting Explanations from Deep Neural Networks (Joan Clarke Stage)
Deep neural networks have become a key technology in domains such as manufacturing, health care, and finance, as they allow for highly accurate predictions. However, there are many scenarios where high accuracy alone is not enough and trust becomes crucial. In such settings, critical decisions must be complemented by explanations so that users can understand the results and the general behavior of the network. This talk presents a practical approach for extracting information about the internal processes of neural networks. For this purpose, simple decision trees are extracted from trained models, allowing a user to follow the reasoning of the network. It is shown that simply fitting a decision tree to a learned model usually leads to unsatisfactory results in terms of accuracy and fidelity. Instead, it is demonstrated how to influence the structure of a neural network during training so that fitting a decision tree yields significantly improved results.
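The baseline the abstract refers to, fitting a decision tree directly to a trained network, can be sketched as follows. This is a minimal illustration, not the speaker's actual method: the dataset, model sizes, and tree depth are illustrative assumptions. The key idea is that the surrogate tree is trained on the network's predicted labels rather than the ground truth, and "fidelity" measures how often the tree agrees with the network.

```python
# Sketch: fit a surrogate decision tree to a trained network's predictions
# (the baseline approach the abstract says is usually unsatisfactory).
# Dataset, architecture, and tree depth are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=2000, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) Train the neural network on the true labels.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# 2) Fit a shallow decision tree to the *network's* predictions,
#    so the tree mimics the network rather than the raw data.
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, net.predict(X_train))

# Fidelity: agreement between tree and network on held-out data.
# Accuracy: agreement between tree and the true labels.
fidelity = np.mean(tree.predict(X_test) == net.predict(X_test))
accuracy = np.mean(tree.predict(X_test) == y_test)
print(f"fidelity to network: {fidelity:.2f}, accuracy on labels: {accuracy:.2f}")
```

The gap between the tree's fidelity and the network's own accuracy is exactly what the talk's training-time modification aims to close: shaping the network during training so that a shallow tree can reproduce its decisions faithfully.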
Cédric Villani, Mathematician, Fields Medalist, member of the Academy of Sciences, 1st vice-president of the Parliamentary Office for Scientific and Technological Assessment (OPECST), Member of Parliament