11 Apr 2019 | 13:00 - 13:20 | Big-Data.AI Summit

Extracting Explanations from Deep Neural Networks

Joan Clarke Stage

Deep neural networks have become a key technology in domains such as manufacturing, health care, and finance because they allow for highly accurate predictions. However, in many scenarios high accuracy alone is not enough and trust becomes crucial: critical decisions must be complemented by explanations so that users can understand the results or the general behavior of the network. This talk presents a practical approach for extracting information about the internal processes of a neural network. For this purpose, simple decision trees are extracted from trained models, allowing a user to follow the reasoning of the network. It is shown that simply fitting a decision tree to a learned model usually leads to unsatisfactory results in terms of accuracy and fidelity. Instead, it is demonstrated how to influence the structure of a neural network during training so that fitting a decision tree yields significantly better results.
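To make the baseline concrete, the sketch below shows the naive approach the talk argues is insufficient: train a network, then fit a small surrogate decision tree to the network's predicted labels and measure both its accuracy and its fidelity (agreement with the network). The dataset, model sizes, and hyperparameters are illustrative assumptions, not the speaker's actual setup.

```python
# Naive surrogate-tree baseline: distill a trained network into a decision tree.
# Everything here (dataset, layer sizes, tree depth) is an illustrative choice.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the "black-box" network on the true labels.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

# Fit a shallow decision tree to the network's predictions, not the true labels,
# so the tree approximates the network's decision function.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, net.predict(X_train))

# Accuracy: how well each model predicts the true labels on held-out data.
print("network accuracy:", accuracy_score(y_test, net.predict(X_test)))
print("tree accuracy:   ", accuracy_score(y_test, tree.predict(X_test)))
# Fidelity: how often the tree agrees with the network on held-out data.
print("fidelity:        ", accuracy_score(net.predict(X_test), tree.predict(X_test)))
```

A gap between the network's accuracy and the tree's fidelity is exactly the problem the talk addresses: a shallow tree fitted after the fact often cannot mimic an unconstrained network, which motivates shaping the network's structure during training so that a faithful tree can be extracted.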
