Human-in-the-Loop Interpretability Prior
Isaac Lage1, Andrew Slavin Ross1, Been Kim2, Samuel J. Gershman1 & Finale Doshi-Velez1
1Harvard University & 2Google Brain
Poster: Today, 10:45 AM - 12:45 PM, Room 210 & 230 AB #119
Evaluate computationally with a proxy: no users needed, but which proxy? And how do we use study results to choose a better one?
Evaluate with user studies: no closed form to optimize.
Our approach: update the model directly with user-study results. No proxy!
Candidate MAP 1: Likelihood = HIGH, Prior = ?
Candidate MAP 2: Likelihood = HIGH, Prior = ?
Candidate MAP 3: Likelihood = HIGH, Prior = ?
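The selection criterion behind these candidates can be written as a MAP objective (a sketch in the paper's framing; M ranges over candidate models, X is the data, and p(M) is the human-interpretability prior that the user studies estimate):

```latex
M^{*} = \arg\max_{M} \; p(X \mid M)\, p(M)
```

Each candidate above already has a high likelihood term p(X | M); what remains unknown is its prior p(M).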
Similarity Based on Explanation Features
User study 1: Prior = MEDIUM
User study 2: Prior = LOW
User study 3: Prior = HIGH
Prior Estimate: Prior = HIGH?
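The build-up above can be sketched as similarity-weighted extrapolation: prior scores measured in a few user studies are propagated to an unevaluated candidate through similarity of its explanation features. The function name, the RBF kernel, and the numbers below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def estimate_prior(query_features, studied_features, studied_priors, length_scale=1.0):
    """Estimate a candidate's interpretability prior from user-studied candidates.

    Similarity is an RBF kernel over explanation features (illustrative choice);
    the estimate is the similarity-weighted average of the measured prior scores.
    """
    diffs = studied_features - query_features                      # (n, d)
    sims = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * length_scale ** 2))
    return float(np.dot(sims, studied_priors) / np.sum(sims))

# Three user studies gave prior estimates for nearby candidates
# (hypothetical scores: MEDIUM = 0.5, LOW = 0.1, HIGH = 0.9).
studied = np.array([[0.4, 0.6], [2.0, 2.0], [0.1, 0.2]])
priors = np.array([0.5, 0.1, 0.9])
# The query candidate sits close to the MEDIUM and HIGH candidates,
# so its estimated prior lands between them, pulled toward HIGH.
estimate = estimate_prior(np.array([0.2, 0.3]), studied, priors)
```

Candidates whose explanation features resemble those rated highly by users inherit a high prior estimate without requiring a new study for every model.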
[Figure: Census dataset. Interpretability (more interpretable is higher) versus number of iterations.]