
One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability



  1. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability. Ronny Luss* (IBM Research AI), *Speaker. Joint work with the AIX360 team at IBM Research. Data Council NYC, November 2019.

  2. Agenda • Why Explainable AI? • Types and Methods for Explainable AI • AIX360 • CEM-MAF Example • FICO Example (BRCG and Protodash)

  3. AI IS NOW USED IN MANY HIGH-STAKES DECISION-MAKING APPLICATIONS: Admission, Credit, Employment, Sentencing

  4. WHAT DOES IT TAKE TO TRUST A DECISION MADE BY A MACHINE (OTHER THAN THAT IT IS 99% ACCURATE)? Is it fair? Is it accountable? “Why” did it make this decision?

  5. THE QUEST FOR “EXPLAINABLE AI”

  6. BUT WHAT ARE WE ASKING FOR? Paul Nemitz, Principal Advisor, European Commission. Talk at IBM Research, Yorktown Heights, May 4, 2018.

  7. WHY EXPLAINABLE AI? Simplification: understanding what’s truly happening can help build simpler systems. Insight: e.g., check whether the code has comments.

  8. WHY EXPLAINABLE AI? (CONTINUED) Debugging: can help to understand what is wrong with a system. Self-driving car slowed down but wouldn’t stop at a red light???

  9. WHY EXPLAINABLE AI? (CONTINUED) Existence of confounders: can help to identify spurious correlations. Examples: pneumonia, diabetes.

  10. WHY EXPLAINABLE AI? (CONTINUED) Fairness: is the decision-making system fair? Robustness and generalizability: is the system basing decisions on the correct features? Widespread adoption.

  11. Agenda • Why Explainable AI? • Types and Methods for Explainable AI • AIX360 • CEM-MAF Example • FICO Example (BRCG and Protodash)

  12. AIX360: COMPETITIVE LANDSCAPE. Toolkits are compared on six capabilities: data explanations, directly interpretable models, local post-hoc explanations, global post-hoc explanations, custom explanations, and explainability metrics. IBM AIX360 covers all of them (2 data explainers, 2 directly interpretable methods, 3 local post-hoc methods, 1 global post-hoc method, 1 custom explanation method, 2 metrics), whereas Seldon Alibi, Oracle Skater, H2O, Microsoft Interpret, Ethical ML, and DrWhy DALEX each cover only a subset of these capabilities. All algorithms in AIX360 are developed by IBM Research. AIX360 also provides demos, tutorials, and guidance on explanations for different use cases. Paper: One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. https://arxiv.org/abs/1909.03012v1
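Getting started with the toolkit is a single pip install; the sketch below shows installation and a couple of imports (module paths follow the AIX360 v0.1.0 documentation and may change in later releases).

```python
# pip install aix360
# Import paths as documented for AIX360 v0.1.0; they may differ in later versions.
from aix360.algorithms.rbm import BooleanRuleCG             # directly interpretable rule sets
from aix360.algorithms.protodash import ProtodashExplainer  # case-based (prototype) explanations
```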

  13. THREE DIMENSIONS OF EXPLAINABILITY. One explanation does not fit all: there are many ways to explain things. Directly interpretable vs. post hoc interpretation: decision rule sets and trees are simple enough for people to understand, and supervised learning of such models is directly interpretable; post hoc interpretation instead probes a black box with a companion model, so the black-box model provides the actual predictions while the interpretation comes through the companion model. Global (model-level) vs. local (instance-level): a global explanation shows the entire predictive model to the user to help them understand it (e.g., a small decision tree, whether obtained directly or in a post hoc manner); a local explanation only shows the explanation associated with an individual prediction (i.e., what was it about this particular person that resulted in her loan being denied). Static vs. interactive (visual analytics): a static interpretation is simply presented to the user, whereas an interactive one lets the user interact with the interpretation.

  14. EXPLAINABILITY TAXONOMY (a decision tree over the AIX360 methods):
      • One-shot static or interactive explanations? Interactive: ? Static: continue below.
      • Understand data or model?
        – Data: explanations as samples, distributions, or features? Samples: ProtoDash (case-based reasoning). Distributions: ? Features: DIP-VAE (learning meaningful features).
        – Model: explanations for individual samples (local) or overall behavior (global)?
          • Local: a directly interpretable (self-explaining) model or post-hoc explanations? Direct: TED (persona-specific explanations). Post-hoc: explanations based on samples or features? Samples: ProtoDash (case-based reasoning). Features: CEM or CEM-MAF (feature-based explanations).
          • Global: a directly interpretable model or post-hoc explanations? Direct: BRCG or GLRM (easy-to-understand rules). Post-hoc: a surrogate model or visualize behavior? Surrogate: ProfWeight (learning an accurate interpretable model). Visualize: ?

  15. EXPLANATION METHOD TYPES. Directly (global) interpretable: decision rule sets and trees are simple enough for people to understand. Examples: Decision Tree (Quinlan 1987), Rule List (Wang and Rudin 2016).
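As a concrete illustration of a directly (globally) interpretable model, a shallow decision tree can be trained and printed as human-readable rules. The sketch below uses scikit-learn on the Iris data; it is not part of AIX360, just a minimal example of the idea.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow tree keeps the learned rule set small enough to read in full.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# export_text renders the fitted tree as nested if/else threshold rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```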

  16. EXPLANATION METHOD TYPES. Directly (global) interpretable. Boolean Decision Rules via Column Generation (BRCG) (Dash et al. 2018): • Learns DNF formulas (ORs of ANDs) with small clauses to predict a binary target; a variant is in AIX360. This technique won the FICO xML Challenge at NeurIPS ’18. • There is an exponential number of possible clauses, and fitting DNFs exactly is a mixed-integer program (MIP). • Column generation: start with a few clauses and solve the MIP; use a pricing problem on the dual variables to identify the best clauses that would still increase prediction accuracy (an efficient step); iterate, and stop when nothing more can be added. • Scales to datasets of ~10,000 samples.
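A minimal usage sketch, assuming AIX360's BooleanRuleCG interface (the class name, the lambda0/lambda1 regularization parameters, and the explain() method follow the AIX360 documentation; exact signatures may differ across versions). The tiny 0/1 DataFrame stands in for real binarized features.

```python
import pandas as pd
from aix360.algorithms.rbm import BooleanRuleCG  # assumed AIX360 import path

# Toy binarized features: BRCG expects 0/1 columns (see the FeatureBinarizer
# sketch in the credit approval tutorial below) and a binary target.
X_bin = pd.DataFrame({
    "ExternalRiskEstimate > 70":  [1, 0, 1, 1, 0, 0, 1, 0],
    "NumSatisfactoryTrades > 20": [1, 1, 0, 1, 0, 1, 0, 0],
})
y = pd.Series([1, 0, 0, 1, 0, 0, 0, 0])  # 1 only when both conditions hold

# lambda0/lambda1 penalize the number of clauses and their length,
# trading a little accuracy for a much simpler DNF rule set.
brcg = BooleanRuleCG(lambda0=1e-3, lambda1=1e-3)
brcg.fit(X_bin, y)
print(brcg.explain())  # the learned OR-of-ANDs rules are the explanation
```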

  17. EXPLANATION METHOD TYPES (CONTINUED). Post hoc interpretation: start with a black-box model and probe into it with a companion model to create interpretations. Typical black boxes: (deep) neural networks, ensembles.

  18. EXPLANATION METHOD TYPES (CONTINUED). Post hoc (global) interpretation. Complex Model (deep neural network) → Simple Model (decision tree, random forest, smaller neural network): can you transfer information from a pre-trained neural network to this simple model?

  19. EXPLANATION METHOD TYPES (CONTINUED). Post hoc (global) interpretation. Knowledge Distillation (Hinton et al. 2015): re-train a simple model on the temperature-scaled soft scores of the complex model; works well when the simple model’s complexity is comparable to the complex model’s, which makes it ideal for compression. ProfWeight (Dhurandhar et al. 2018): re-train a simple model by weighting samples, where the weights come from logistic probes attached to the inner layers of the complex model, e.g. weight = (p1 + p2 + p3 + p4)/4; a high weight indicates an easy sample and a low weight a difficult one; works well when the simple model’s complexity is very small compared to the complex model’s.
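A rough sketch of the ProfWeight idea (an illustration, not the AIX360 implementation): fit a logistic probe on each probed intermediate representation, average the confidence the probes assign to the true label, and use those averages as sample weights when retraining the simple model. The representations and models below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def profweight_style_weights(layer_reps, y):
    """layer_reps: list of arrays, one per probed layer, each (n_samples, d_layer).
    Returns the average probability the probes assign to each sample's true label."""
    confidences = []
    for H in layer_reps:
        probe = LogisticRegression(max_iter=1000).fit(H, y)
        p_true = probe.predict_proba(H)[np.arange(len(y)), y]  # confidence in true label
        confidences.append(p_true)
    return np.mean(confidences, axis=0)  # high -> easy sample, low -> difficult sample

# Illustrative usage with random stand-ins for intermediate representations.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
layer_reps = [rng.normal(size=(200, 16)), rng.normal(size=(200, 8))]
X_simple = rng.normal(size=(200, 5))  # features available to the simple model

weights = profweight_style_weights(layer_reps, y)
simple_model = DecisionTreeClassifier(max_depth=3).fit(X_simple, y, sample_weight=weights)
```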

  20. EXPLANATION METHOD TYPES (CONTINUED). Post hoc (local) interpretation. Saliency Maps (Simonyan et al. 2013).
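A minimal sketch of a gradient-based saliency map in the spirit of Simonyan et al.: take the gradient of the predicted class score with respect to the input pixels and visualize its magnitude. The untrained model below is a placeholder for any differentiable image classifier.

```python
import torch
import torch.nn as nn

# Placeholder classifier standing in for a trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

img = torch.rand(3, 32, 32, requires_grad=True)  # input image, gradients enabled

scores = model(img.unsqueeze(0))             # shape (1, num_classes)
top_class = scores.argmax().item()
scores[0, top_class].backward()              # d(top class score) / d(pixels)

# Saliency: per-pixel maximum of the absolute gradient across color channels.
saliency = img.grad.abs().max(dim=0).values  # shape (32, 32)
```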

  21. EXPLANATION METHOD TYPES (CONTINUED). Post hoc (local) interpretation. Contrastive Explanations – “Pertinent Negatives” (CEM-MAF) (Dhurandhar et al. 2018).
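To make “pertinent negatives” concrete: a pertinent negative is a minimal set of features that are absent from the input but whose addition would change the model's prediction. The toy greedy search below only illustrates that idea on binary tabular features; it is not the CEM-MAF optimization (which works on images with learned attribute functions), and the classifier and data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 8))       # binary "is the feature present" inputs
y = (X[:, 0] & X[:, 3]).astype(int)         # class 1 requires features 0 and 3
clf = RandomForestClassifier(random_state=0).fit(X, y)

def pertinent_negative(x, clf, max_additions=3):
    """Greedily turn on absent features until the predicted class flips.
    Returns the indices of the added features (an illustrative pertinent negative)."""
    original = clf.predict([x])[0]
    x_mod, added = x.copy(), []
    for _ in range(max_additions):
        absent = [j for j in range(len(x_mod)) if x_mod[j] == 0]
        # Score each absent feature by how much turning it on favors the other class.
        gains = []
        for j in absent:
            trial = x_mod.copy()
            trial[j] = 1
            gains.append(clf.predict_proba([trial])[0, 1 - original])
        best = absent[int(np.argmax(gains))]
        x_mod[best] = 1
        added.append(best)
        if clf.predict([x_mod])[0] != original:
            break
    return added

x = np.array([1, 0, 0, 0, 1, 0, 0, 0])       # predicted class 0: feature 3 is missing
print(pertinent_negative(x, clf))            # adding feature 3 flips the prediction
```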

  22. ONE EXPLANATION DOES NOT FIT ALL – DIFFERENT STAKEHOLDERS. Different stakeholders require explanations for different purposes and with different objectives; explanations will have to be tailored to their needs. End users (“Why did you recommend this treatment?”): who – physicians, judges, loan officers, teacher evaluators; why – trust/confidence, insights. Affected users (“Why was my loan denied? How can I be approved?”): who – patients, the accused, loan applicants, teachers; why – understanding of factors. Regulatory bodies (“Prove that your system didn't discriminate.”): who – EU (GDPR), NYC Council, US Gov’t, etc.; why – ensure fairness for constituents. AI system builders/stakeholders (“Is the system performing well? How can it be improved?”): why – ensure or improve performance.

  23. Agenda • Why Explainable AI? • Types and Methods for Explainable AI • AIX360 • CEM-MAF Example • FICO Example (BRCG and Protodash)

  24.–27. AI EXPLAINABILITY 360 (V0.1.0)

  28.–34. AI EXPLAINABILITY 360 (V0.1.0): CONTRASTIVE EXPLANATIONS VIA CEM-MAF

  35. AI EXPLAINABILITY 360 (V0.1.0): CREDIT APPROVAL TUTORIAL

  36. AI EXPLAINABILITY 360 (V0.1.0): CREDIT APPROVAL TUTORIAL. Sample of FICO HELOC data.

  37. AI EXPLAINABILITY 360 (V0.1.0): CREDIT APPROVAL TUTORIAL. BRCG requires data to be binarized.
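A minimal sketch of that binarization step, assuming AIX360's FeatureBinarizer interface (the numThresh and negations parameters follow the AIX360 documentation and may differ across versions). The toy DataFrame stands in for the FICO HELOC features.

```python
import pandas as pd
from aix360.algorithms.rbm import FeatureBinarizer  # assumed AIX360 import path

# Toy numeric columns standing in for HELOC features.
df = pd.DataFrame({
    "ExternalRiskEstimate":  [55, 61, 67, 72, 81, 59],
    "MSinceOldestTradeOpen": [144, 58, 66, 169, 333, 137],
})

# Thresholds each numeric column at sample quantiles and, with negations=True,
# also emits the complementary comparisons, yielding the 0/1 columns BRCG needs.
fb = FeatureBinarizer(numThresh=9, negations=True)
df_bin = fb.fit_transform(df)
print(df_bin.head())
```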
