
Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks - PowerPoint PPT Presentation



  1. Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks — Devinder Kumar*, Alexander Wong & Graham W. Taylor

  2. Current Approaches – Heatmap/Attention based! Deconvolution: Zeiler et al., ECCV 2014; Guided backpropagation: ICLR 2015; Prediction Difference: Zintgraf et al., ICLR 2017; Saliency: Simonyan et al., 2013; Deep Taylor Decomposition: Montavon et al., Pattern Recognition journal 2017

  3. Interpretations [figure: input digit images with DNN outputs and heatmaps]
     - Output 3 – Focuses on the right areas: looks correct!
     - Output 2 – Focuses on the wrong part; the curve might be a two, but why not 3 or 5 or 6?
     - Output 3 – Probably focuses on the correct part, but why 3?

  4. CLass-Enhanced Attentive Response (CLEAR) Map: Binary Heatmap vs. CLEAR Map

  5. CLass-Enhanced Attentive Response (CLEAR): for an input x, compute the output response h(x|c) at layer l for each class c. The dominant class map takes the class with the maximum response at each location, C(x) = argmax_c h(x|c), and the dominant response map takes that maximum itself, D(x) = max_c h(x|c).
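The two maps above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the function name is hypothetical, and the per-class attentive response maps are assumed to already be computed (e.g. by back-projection to the input layer) and stacked along the first axis.

```python
import numpy as np

def clear_decomposition(responses):
    """Split stacked per-class response maps (num_classes, H, W) into
    the two maps that make up a CLEAR map:
      - dominant_class: at each pixel, the class with the largest response
      - dominant_response: at each pixel, that largest response value
    """
    dominant_class = np.argmax(responses, axis=0)   # C(x) = argmax_c h(x|c)
    dominant_response = np.max(responses, axis=0)   # D(x) = max_c h(x|c)
    return dominant_class, dominant_response
```

Keeping the argmax and the max as separate arrays mirrors the slide's distinction between the dominant class of influence and its degree of influence.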

  6. CLass-Enhanced Attentive Response (CLEAR) Map

  7. Interpretation [figure: input digit images with heatmaps and CLEAR maps]
     - Output 3 – Heatmap: focuses on the right areas, looks correct! CLEAR map: the major part of the positive focus is of class 3.
     - Output 2 – Heatmap: focuses on the wrong part; the curve might be a two, but why not 3 or 5 or 6? CLEAR map: the major part of the positive focus represents class 2.
     - Output 3 – Heatmap: probably focuses on the correct part, but why 3? CLEAR map: the major part of the negative focus is class 3; higher activation than any other class.
     CLEAR color map: one color per digit class 0 1 2 3 4 5 6 7 8 9
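A CLEAR map like the ones interpreted above overlays both pieces of information in one image: hue encodes the dominant class, brightness encodes the dominant response. The sketch below assumes 10 digit classes and uses a made-up RGB palette; the palette values and function name are illustrative, not the paper's actual color map.

```python
import numpy as np

# Hypothetical palette: one RGB color per digit class 0-9 (values in [0, 1]).
PALETTE = np.array([
    [230,  25,  75], [ 60, 180,  75], [255, 225,  25], [  0, 130, 200],
    [245, 130,  48], [145,  30, 180], [ 70, 240, 240], [240,  50, 230],
    [210, 245,  60], [250, 190, 190]], dtype=float) / 255.0

def render_clear_map(dominant_class, dominant_response):
    """Color each pixel by its dominant class, scaled by the normalized
    dominant response, giving an (H, W, 3) RGB image."""
    intensity = dominant_response / (dominant_response.max() + 1e-8)
    # Integer-array indexing looks up one palette color per pixel.
    return PALETTE[dominant_class] * intensity[..., None]
```

Strongly attended regions thus appear bright in the color of the class driving them, while weakly attended regions fade to black.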

  8. MNIST RESULTS Correctly Classified Wrongly Classified

  9. SVHN RESULTS Correctly Classified Wrongly Classified

  10. MNIST & SVHN RESULTS

  11. Stanford Dog Dataset Results

  12. Stanford Dog Dataset Results

  13. Stanford Dog Dataset Results

  14. Conclusion • Sparsity in the individual response maps from the last-layer kernels: the same pattern holds for all datasets considered. • Evidence for classes tends to come from very specific, localized regions. • CLEAR maps enable visualization not only of the areas of interest that predominantly influence the decision-making process, but also of the degree of influence and the dominant class of influence in these areas. • Showed the efficacy of CLEAR maps both quantitatively and qualitatively.

  15. Thank You! devinder.kumar@uwaterloo.ca http://devinderkumar.com
