From ML Successes to Applications (ICIP'18 Tutorial on Interpretable Deep Learning)
Black Box Models
Huge volumes of data and computing power go into a deep neural network that solves the task; the information the network relies on remains implicit.
Why Interpretability?
Why Interpretability?
We need interpretability in order to: verify the system, understand its weaknesses, learn new things from the data, and address legal aspects.
Why Interpretability?
1) Verify that the classifier works as expected. Wrong decisions can be costly and dangerous: "Autonomous car crashes, because it wrongly recognizes …"; "AI medical diagnosis system misclassifies patient's disease …".
Why Interpretability?
2) Understand weaknesses & improve the classifier: judge the model not by its generalization error alone, but by generalization error + human experience.
Why Interpretability?
3) Learn new things from the learning machine. Old promise: "Learn about the human brain." "It's not a human move. I've never seen a human play this move." (Fan Hui)
Why Interpretability?
4) Interpretability in the sciences: learn about the physical / biological / chemical mechanisms (e.g. find genes linked to cancer, identify binding sites …).
Why Interpretability?
5) Compliance with legislation: the European Union's new General Data Protection Regulation includes a "right to explanation"; retain the human decision in order to assign responsibility. "With interpretability we can ensure that ML models work in compliance with proposed legislation."
Example: Autonomous Driving
Example: Medical Diagnosis
Example: Quantum Chemistry
From Input to Abstractions
Learning Hierarchical Representations
- Multiple neurons with similar structure, but with different weight parameters.
- Compose them into a deep layered architecture (a minimal sketch follows below).
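Not from the slides: a minimal NumPy sketch of these two points, the same layer structure (affine map + ReLU) repeated with different weight parameters and composed in depth. The layer sizes and the initialization are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def make_layer(n_in, n_out, rng):
    # Same structure for every layer, but each layer has its own weight parameters.
    W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
    b = np.zeros(n_out)
    return W, b

def forward(x, layers):
    # Compose the layers into a deep architecture: h <- relu(h W + b), layer by layer.
    h = x
    for W, b in layers:
        h = relu(h @ W + b)
    return h

rng = np.random.default_rng(0)
sizes = [784, 300, 100, 10]                        # illustrative sizes (MNIST-like input)
layers = [make_layer(n, m, rng) for n, m in zip(sizes[:-1], sizes[1:])]
out = forward(rng.normal(size=(1, 784)), layers)   # output activations, shape (1, 10)
```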
Dimensions of Interpretability
Model Analysis: "what does something predicted as a scooter typically look like?" Decision Analysis: "why is a given image classified as a scooter?"
Model analysis
Interpreting the Model: Activation Maximization
- find a prototypical example of a category
- find a pattern maximizing the activity of a neuron (see the sketch below)
[Figure: prototypes for "cheeseburger", "goose", "car"; simple regularizer (Simonyan et al., 2013) vs. complex regularizer (Nguyen et al., 2016)]
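A minimal sketch of the gradient-ascent formulation with a simple l2 regularizer (in the spirit of Simonyan et al., 2013), not the authors' code. The torchvision model, the weights string, the class index 933 (assumed to be "cheeseburger" in ImageNet), and all hyperparameters are assumptions; input preprocessing is omitted.

```python
import torch
import torchvision.models as models

# Any differentiable classifier works; a pretrained torchvision ResNet is assumed here.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
target_class = 933                     # assumed ImageNet index for "cheeseburger"

x = torch.zeros(1, 3, 224, 224, requires_grad=True)   # start from a blank image
optimizer = torch.optim.Adam([x], lr=0.05)
lambda_l2 = 1e-4                       # strength of the simple (l2) regularizer

for step in range(200):
    optimizer.zero_grad()
    score = model(x)[0, target_class]              # activity of the target output neuron
    loss = -score + lambda_l2 * (x ** 2).sum()     # maximize activity, keep the input simple
    loss.backward()
    optimizer.step()

prototype = x.detach()                 # prototypical input for the chosen category
```

Swapping the l2 term for a stronger image prior (e.g. a generative model) corresponds, roughly, to the complex-regularizer column in the figure.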
Limitations of Global Interpretations
Making Deep Neural Nets Transparent
Decision analysis
Decision Analysis: LRP
Layer-wise Relevance Propagation (LRP) (Bach et al., PLOS ONE, 2015) explains the decisions of a black-box classifier.
Decision Analysis: LRP
[Figure: classification step, the network outputs scores for the classes cat, rooster, dog]
Decision Analysis: LRP
What makes this image a "rooster image"? Idea: redistribute the evidence for class rooster back to image space.
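A minimal sketch, my own and not the authors' reference implementation, of how this redistribution can be written for one fully connected layer using the LRP-epsilon rule from Bach et al. (2015); the helper name and the plain-NumPy setting are assumptions.

```python
import numpy as np

def lrp_dense(h_in, W, b, R_out, eps=1e-6):
    """Redistribute relevance R_out from a dense layer's outputs to its inputs (LRP-epsilon rule).

    h_in : activations entering the layer, shape (n_in,)
    W, b : layer parameters, shapes (n_in, n_out) and (n_out,)
    R_out: relevance of the layer's outputs, shape (n_out,)
    """
    z = h_in @ W + b                             # pre-activations of the layer
    z = z + eps * np.where(z >= 0, 1.0, -1.0)    # stabilizer, avoids division by zero
    s = R_out / z                                # relevance per unit of pre-activation
    return h_in * (W @ s)                        # relevance of the inputs, shape (n_in,)
```

Starting from the evidence of the "rooster" output neuron and applying this rule layer by layer down to the pixels yields the heatmap.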
Decision Analysis: LRP
Theoretical interpretation: Deep Taylor Decomposition (Montavon et al., 2017).
Decision Analysis: LRP
[Figure: explanation step, heatmaps for the classes cat, rooster, dog]
Decision Analysis: LRP
[Figure: heatmap of prediction "3" and heatmap of prediction "9"]
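A hedged continuation of the sketches above, assuming the layers list from the earlier NumPy network (with trained rather than random weights) and the lrp_dense helper: the two heatmaps correspond to starting the propagation at the output neuron for "3" and at the one for "9".

```python
import numpy as np

# x: a digit image flattened to shape (784,); `layers` and `lrp_dense` as sketched above.
activations = [x]
for W, b in layers:
    activations.append(np.maximum(0.0, activations[-1] @ W + b))

def heatmap_for_class(c):
    # Put all initial relevance on the chosen output neuron, then propagate downwards.
    R = np.zeros_like(activations[-1])
    R[c] = activations[-1][c]
    for (W, b), h_in in zip(reversed(layers), reversed(activations[:-1])):
        R = lrp_dense(h_in, W, b, R)
    return R.reshape(28, 28)

heatmap_3 = heatmap_for_class(3)
heatmap_9 = heatmap_for_class(9)
```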
Other Explanation Methods