Interpretability of Machine Learning for Computer Vision
Xinshuo Weng*
*Most slides borrowed from the CVPR 2018 tutorial and Stanford
Type of interpretability methods
Understanding Models at Different Granularity
● What is a unit doing?
● What are all the units doing?
● How are units relevant to the prediction? Understanding the explainable model
What is a unit doing? - Visualize the individual unit
● Visualize the filter; see the sketch below
Krizhevsky et al., “ImageNet Classification with Deep Convolutional Neural Networks”, NIPS 2012.
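For concreteness, a minimal PyTorch sketch of first-layer filter visualization. The pretrained torchvision AlexNet and the 8x8 grid are illustrative assumptions, not something specified on the slides:

```python
# Plot AlexNet's first-layer (conv1) filters as small RGB tiles.
import matplotlib.pyplot as plt
import torchvision

model = torchvision.models.alexnet(weights="IMAGENET1K_V1")
filters = model.features[0].weight.detach()         # conv1 weights: (64, 3, 11, 11)

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, f in zip(axes.flat, filters):
    f = (f - f.min()) / (f.max() - f.min() + 1e-8)  # rescale to [0, 1] for display
    ax.imshow(f.permute(1, 2, 0).numpy())           # (H, W, C) layout for imshow
    ax.axis("off")
plt.show()
```

First-layer filters are directly viewable as images because their three input channels are RGB; deeper filters are not, which motivates the activation- and image-based methods that follow.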
What is a unit doing? - Visualize the individual unit
● Visualize the activation; see the sketch below
Yosinski et al., “Understanding Neural Networks Through Deep Visualization”, ICML 2015.
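A minimal sketch of capturing and displaying one unit's activation map with a forward hook. The layer index (AlexNet conv5) and unit id 42 are arbitrary illustrative choices:

```python
import matplotlib.pyplot as plt
import torch
import torchvision

model = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval()
acts = {}
model.features[10].register_forward_hook(        # features[10] is AlexNet's conv5
    lambda m, inp, out: acts.update(conv5=out.detach()))

img = torch.randn(1, 3, 224, 224)                # stand-in for a preprocessed image
with torch.no_grad():
    model(img)

plt.imshow(acts["conv5"][0, 42], cmap="viridis") # one unit's spatial activation map
plt.axis("off")
plt.show()
```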
What is a unit doing? - Visualize the individual unit
● Visualize the corresponding images: top activated images (nearest neighbors); see the sketch below
Yosinski et al., “Understanding Neural Networks Through Deep Visualization”, ICML 2015.
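A sketch of retrieving the top-k dataset images that most strongly activate one unit. The `dataset` (yielding image/label pairs) and the hooked layer are assumptions for illustration:

```python
import heapq
import torch

def top_activated(model, layer, unit, dataset, k=9):
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(a=o.detach()))
    heap = []                                        # min-heap of (score, image index)
    with torch.no_grad():
        for idx, (img, _) in enumerate(dataset):
            model(img.unsqueeze(0))
            score = acts["a"][0, unit].max().item()  # the unit's peak response
            heapq.heappush(heap, (score, idx))
            if len(heap) > k:
                heapq.heappop(heap)                  # drop the current weakest entry
    handle.remove()
    return sorted(heap, reverse=True)                # (score, index) pairs, best first
```

Displaying the returned images (or crops around the unit's receptive field) shows what visual pattern the unit responds to.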
What is a unit doing? - Visualize the individual unit
● Deconvolution
Zeiler and Fergus, “Visualizing and Understanding Convolutional Networks”, ECCV 2014.
What is a unit doing? - Visualize the individual unit
● Back-propagation; see the guided-backprop sketch below
Erhan et al., “Visualizing Higher-Layer Features of a Deep Network”, University of Montreal, 2009.
Springenberg et al., “Striving for Simplicity: the All Convolutional Net”, ICLR 2015.
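A minimal sketch of a gradient saliency map with the guided-backprop ReLU rule of Springenberg et al. The hook mechanics follow current PyTorch, and the random input is a stand-in for a real preprocessed image:

```python
import torch
import torchvision

model = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval()

def guide(module, grad_in, grad_out):
    # Guided backprop: only let positive gradients flow back through ReLU.
    return (torch.clamp(grad_in[0], min=0.0),)

for m in model.modules():
    if isinstance(m, torch.nn.ReLU):
        m.inplace = False                     # full backward hooks need out-of-place ops
        m.register_full_backward_hook(guide)

img = torch.randn(1, 3, 224, 224, requires_grad=True)
model(img)[0].max().backward()                # backprop from the top class score
saliency = img.grad[0].abs().max(dim=0).values  # (224, 224) saliency map
```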
What are all the units doing?
● Visualize the features; see the t-SNE sketch below
● Dimensionality reduction: t-SNE [figure: PCA vs. t-SNE projections]
Maaten and Hinton, “Visualizing Data using t-SNE”, JMLR 2008.
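A minimal sketch of a 2-D t-SNE map of deep features. The random `features` and labels `y` below are stand-ins for real penultimate-layer activations, and the perplexity is an illustrative default:

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 256))        # stand-in for real CNN features (N, D)
y = rng.integers(0, 10, size=500)             # stand-in class labels

emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=y, s=5, cmap="tab10")
plt.title("t-SNE of deep features")
plt.show()
```

If the representation is semantically organized, points of the same class form tight clusters in the 2-D map, which PCA's linear projection typically fails to show.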
What are all the units doing?
● Visualize the corresponding images: top activated image (nearest neighbors in feature space); see the sketch below
Krizhevsky et al., “ImageNet Classification with Deep Convolutional Neural Networks”, NIPS 2012.
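A sketch of the nearest-neighbor retrieval behind the AlexNet paper's last-layer NN figure. `features` is an assumed (N, D) matrix of image embeddings:

```python
import numpy as np

def nearest_neighbors(features, query_idx, k=5):
    dists = np.linalg.norm(features - features[query_idx], axis=1)
    order = np.argsort(dists)      # ascending distance; position 0 is the query itself
    return order[1:k + 1]          # indices of the k closest other images
```

Neighbors that are semantically similar but pixel-wise very different are evidence that the feature space encodes high-level content rather than raw appearance.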
What are all the units doing?
● Back-propagation
Ulyanov et al., “Deep Image Prior”, CVPR 2018.
What are all the units doing?
● Image synthesis; see the feature-inversion sketch below
Dosovitskiy and Brox, “Generating Images with Perceptual Similarity Metrics based on Deep Networks”, NIPS 2016.
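A minimal sketch of synthesizing an image by feature inversion: optimize the input so its conv features match a target code. The plain L2 image prior and step count here are crude stand-ins for the learned priors used in the papers above:

```python
import torch
import torchvision

model = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval()
feat = model.features                                # conv feature extractor

target = feat(torch.randn(1, 3, 224, 224)).detach()  # stand-in target feature code
x = torch.zeros(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    loss = (feat(x) - target).pow(2).mean()          # match the feature code
    loss = loss + 1e-4 * x.pow(2).mean()             # crude image prior (L2 penalty)
    loss.backward()
    opt.step()
# x now approximates an image whose conv features resemble `target`.
```

How much of the original image such an inversion recovers tells you what information the representation preserves and what it throws away.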
What are all the units doing?
● From qualitative to quantitative analysis: Network Dissection; see the IoU sketch below
Bau et al., “Network Dissection: Quantifying Interpretability of Deep Visual Representations”, CVPR 2017.
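A sketch of the score at the core of Network Dissection: the IoU between a unit's thresholded (and upsampled) activation map and a concept's segmentation mask. In the paper the threshold is chosen so the unit fires on its top 0.5% of pixels over the whole dataset; here it is passed in as a plain argument:

```python
import torch
import torch.nn.functional as F

def dissection_iou(act_map, concept_mask, threshold):
    # act_map: (h, w) float activations; concept_mask: (H, W) bool segmentation mask.
    up = F.interpolate(act_map[None, None], size=concept_mask.shape,
                       mode="bilinear", align_corners=False)[0, 0]
    seg = up > threshold                              # the unit's binary detection mask
    inter = (seg & concept_mask).sum().float()
    union = (seg | concept_mask).sum().float().clamp(min=1)
    return (inter / union).item()
```

A unit is labeled with the concept whose IoU is highest (above a cutoff), turning "what is this unit doing?" into a measurable quantity.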
How are units relevant to the prediction? Understanding the explainable model
● Ablation study: occlusion effect; see the sketch below
Zhou et al., “Revisiting the Importance of Individual Units in CNNs via Ablation”, arXiv 2018.
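A sketch of a unit-ablation test: zero one conv5 channel with a forward hook and measure the drop in a class score. The image, class id, and unit id are illustrative stand-ins:

```python
import torch
import torchvision

model = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval()
img = torch.randn(1, 3, 224, 224)                 # stand-in preprocessed image
cls, unit = 207, 42                               # arbitrary class and unit ids

def ablate(m, inp, out):
    out[:, unit] = 0.0                            # silence one channel
    return out

with torch.no_grad():
    base = model(img)[0, cls].item()
    handle = model.features[10].register_forward_hook(ablate)
    drop = base - model(img)[0, cls].item()
    handle.remove()
print(f"class-score drop from ablating unit {unit}: {drop:.4f}")
```

Repeating this per unit and per class gives a map of which units each prediction actually depends on.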
How are units relevant to the prediction? Understanding the explainable model
● Ablation study: occlusion effect
● For sequential modeling, the findings below hold across model configurations; a context-ablation sketch follows
● The usable range of context (memory) is limited to roughly 200 tokens
● Word order matters in nearby context (about 50 tokens) but not in long-range context
Khandelwal et al., “Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context”, ACL 2018.
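A sketch of the perturbation methodology behind these findings: shuffle all tokens farther than `boundary` positions from the target and compare the model's loss with and without the perturbation. The `lm` interface (a callable returning loss on a token list) is an assumed stand-in:

```python
import random

def loss_with_shuffled_far_context(lm, tokens, boundary):
    far, near = list(tokens[:-boundary]), list(tokens[-boundary:])
    random.shuffle(far)            # destroy long-range order, keep nearby order intact
    return lm(far + near)          # if loss is unchanged, far word order did not matter
```

Sweeping `boundary` locates the point (around 50 tokens in the paper) beyond which word order stops affecting the model.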
Conclusion
● How do we improve the model based on interpretability findings?