Rigorous Explanations for Machine Learning Models
Joao Marques-Silva (joint work with A. Ignatiev and N. Narodytska)
University of Lisbon, Portugal
AITP 2019 Conference, Obergurgl, Austria, April 2019
Progress in automated reasoning
• Automated reasoners (AR):
  – SAT
  – ILP
  – ASP
  – SMT
  – FOL
  – Reasoners as oracles
  – Reasoners within reasoners
Progress in automated reasoning & our work
[Scatter plots comparing solver runtimes, one per application:]
  – Propositional abduction: AbHS+ vs. Hyper⋆ (1800 sec. timeout)
  – Model-based diagnosis: scrypto vs. wboinc (600 sec. timeout)
  – Axiom pinpointing for EL+: EL+SAT vs. EL2MUS (3600 sec. timeout)
The question: how can AR improve ML’s robustness?
[M. Vardi, MLMFM’18 Summit]
Machine learning vs. automated reasoning
• Improve Reasoners: Exploit ML (Efficiency)
• Improve ML: Exploit Reasoners (Robustness)
Our work ...
• Focus on classification problems
• Globally correct (i.e. rigorous) explanations for predictions made
• Disclaimer: first inroads into ML & XAI; comments welcome
Outline
• Successes & Pitfalls of ML
• Explainable AI
• Explanations with Abductive Reasoning
• Results
Some ML successes & expectations (circa 2017)
• IBM Watson
• Deepmind AlphaGo & AlphaZero
• Image Recognition
• Speech Recognition
• Financial Services
• Medical Diagnosis
• ...

Opportunities for AI / ML (until 2025), source: Goldman-Sachs:
  – Agriculture: $20bn addressable market
  – Healthcare: $54bn savings
  – Energy: $140bn savings
  – Finance (US): $34bn~$43bn savings & revenue
  – Retail: $54bn savings + $41bn revenue
Many more applications expected
[Figures; sources: Google, Wikipedia, © DARPA]
But ML models are brittle
[Figure; source: http://gradientscience.org/intro_adversarial/]
Also, some ML models are interpretable
• Decision trees; decision/rule lists and sets

Example (target: Hike):

  Ex.  Vacation (V)  Concert (C)  Meeting (M)  Expo (E)  Hike (H)
  e1        0             0            1           0         0
  e2        1             0            0           0         1
  e3        0             0            1           1         0
  e4        1             0            0           1         1
  e5        0             1            1           0         0
  e6        0             1            1           1         0
  e7        1             1            0           1         1

Rule list:
  if ¬Meeting then Hike
  if ¬Vacation then ¬Hike

[Decision tree: test M?; M = 0 → Hike (1); M = 1 → test V?; V = 0 → ¬Hike (0); V = 1 → ? (not covered by the examples)]
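The rule list can be written down directly as executable code. Below is a minimal Python sketch (not from the slides); the function name and the choice that the tree's uncovered '?' branch predicts ¬Hike are assumptions made here for illustration.

  # Rule-list classifier for the Hike example.
  def predict_hike(vacation, concert, meeting, expo):
      if not meeting:        # if ¬Meeting then Hike
          return True
      if not vacation:       # if ¬Vacation then ¬Hike
          return False
      return False           # uncovered '?' case: assume ¬Hike

  # The seven examples e1..e7 as (V, C, M, E, H); the classifier agrees on all of them.
  examples = [
      (0, 0, 1, 0, 0), (1, 0, 0, 0, 1), (0, 0, 1, 1, 0), (1, 0, 0, 1, 1),
      (0, 1, 1, 0, 0), (0, 1, 1, 1, 0), (1, 1, 0, 1, 1),
  ]
  assert all(predict_hike(v, c, m, e) == bool(h) for v, c, m, e, h in examples)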
But other ML models are not (interpretable)...
[Figure © DARPA] Why does the NN predict a cat?
Sample of ongoing efforts
• Verification of NNs:
  – Sound vs. unsound vs. complete [M.P. Kumar, VMCAI’19]
  – E.g. Reluplex: dedicated reasoning within an SMT solver
• Explanations for non-interpretable (i.e. black-box) models:
  – Until recently, most approaches heuristic-based
What is eXplainable AI (XAI)?
[Figure © DARPA]
Why XAI?
[Figure © DARPA]
Relevancy of XAI & hundreds(?) of recent papers
[Figure © DARPA]
How to XAI? Main challenge: black-box models
• Heuristic approaches, e.g. LIME & Anchor [Ribeiro et al., KDD’16, AAAI’18]
  – Compute local explanations ...
  – ... offer no guarantees
• Recent efforts on rigorous approaches:
  – Compilation-based, e.g. for BNCs [Shih, Choi & Darwiche, IJCAI’18]
    ◮ Issues with scalability
  – Abduction-based, e.g. for NNs [Ignatiev, Narodytska, M.-S., AAAI’19]
    ◮ Issues with scalability, but less significant
Some current challenges
• For heuristic methods: lack of rigor (more later)
• For rigorous methods: scalability, scalability, scalability...
From ML model to logic
[Figure © DARPA]
• Encode the ML model as a formula F, the given instance (its feature values) as a cube C, and the prediction as a literal E
• The model's prediction on the instance corresponds to the entailment C ∧ F ⊨ E
• Must be able to encode the ML model, e.g. in SMT, ILP, etc.
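As a toy illustration (not the SMT/ILP pipeline used in the actual work), the Hike rule list from earlier can be encoded propositionally and the entailment C ∧ F ⊨ E checked by brute force; the choice of variables and the encoding of the rules as implications are assumptions made here for illustration.

  from itertools import product

  VARS = ['V', 'M', 'H']      # Concert/Expo omitted: irrelevant to the rules

  def formula_F(a):           # rule list as (¬M → H) ∧ ((M ∧ ¬V) → ¬H)
      return (a['M'] or a['H']) and (not (a['M'] and not a['V']) or not a['H'])

  def cube_C(a):              # instance e2 restricted to the relevant features: V ∧ ¬M
      return a['V'] and not a['M']

  def literal_E(a):           # the prediction: Hike
      return a['H']

  # C ∧ F ⊨ E  iff  no assignment satisfies C ∧ F ∧ ¬E
  assignments = (dict(zip(VARS, bits))
                 for bits in product([False, True], repeat=len(VARS)))
  assert not any(cube_C(a) and formula_F(a) and not literal_E(a)
                 for a in assignments)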
Abductive explanations of ML models
• Given a classifier F, a cube C and a prediction E,
  compute a (subset- or cardinality-) minimal Cm ⊆ C s.t.
    Cm ∧ F ⊭ ⊥  and  Cm ∧ F ⊨ E
• Iterative explanation procedure
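A worked instance on the Hike example, taking F to be the rule list written as implications (this particular encoding, and the renaming of the Concert and Expo features to Co and Ex to avoid clashing with the names C and E, are choices made here for illustration): with F = (¬M → H) ∧ ((M ∧ ¬V) → ¬H), C = V ∧ ¬Co ∧ ¬M ∧ ¬Ex (example e2) and E = H, one subset-minimal (here also cardinality-minimal) explanation is Cm = {¬M}: ¬M ∧ F ⊭ ⊥, and ¬M ∧ (¬M → H) already entails H, whereas no single other literal of C entails the prediction.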
Computing primes
1. Cm ∧ F ⊭ ⊥ — tautology
2. Cm ∧ F ⊨ E ⇔ Cm ⊨ (F → E)
• Cm is a prime implicant of F → E
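Step 2 is the deduction theorem spelled out: Cm ∧ F ⊨ E ⇔ ⊨ (Cm ∧ F) → E ⇔ ⊨ Cm → (F → E) ⇔ Cm ⊨ (F → E). So a consistent, subset-minimal Cm entailing F → E is exactly a prime implicant of F → E.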
Computing one minimal explanation
• One subset-minimal explanation:

    Input: F under M, initial cube C, prediction E
    Output: Subset-minimal explanation Cm
    begin
      for l ∈ C:
        if Entails(C \ {l}, F → E):
          C ← C \ {l}
      return C
    end

• One cardinality-minimal explanation:
  – Harder than computing a subset-minimal explanation
  – Exploit implicit hitting set dualization
  – Details in earlier papers
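A self-contained Python sketch of the loop above on the Hike example, with a brute-force Entails oracle standing in for the SMT/ILP oracle used in the actual work; the propositional encoding of the rule list, the variable names and the literal representation are assumptions made here for illustration.

  from itertools import product

  VARS = ['V', 'M', 'H']      # Concert/Expo omitted: irrelevant to the rules

  def formula_F(a):           # rule list as (¬M → H) ∧ ((M ∧ ¬V) → ¬H)
      return (a['M'] or a['H']) and (not (a['M'] and not a['V']) or not a['H'])

  def entails(cube, concl):
      # Brute-force check of  cube ∧ F ⊨ concl  over all assignments.
      for bits in product([False, True], repeat=len(VARS)):
          a = dict(zip(VARS, bits))
          if all(a[v] == val for v, val in cube) and formula_F(a) and not concl(a):
              return False    # counterexample found
      return True

  def subset_minimal_explanation(cube, concl):
      # Drop literals one at a time as long as the prediction stays entailed.
      cube = list(cube)
      for lit in list(cube):
          trial = [l for l in cube if l != lit]
          if entails(trial, concl):
              cube = trial
      return cube

  # Instance e2 restricted to the relevant features: V = 1, M = 0; prediction: Hike.
  C = [('V', True), ('M', False)]
  print(subset_minimal_explanation(C, lambda a: a['H']))   # -> [('M', False)]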
Outline
• Successes & Pitfalls of ML
• Explainable AI
• Explanations with Abductive Reasoning
• Encoding Neural Networks
• Results
Encoding NNs
[Figure: feed-forward network with an input layer (Input #1 ... Input #4), a hidden layer, and an output layer (Output)]
• Each layer (except the first) is viewed as a block:
  – Compute x′ given the input x, the weights matrix A, and the bias vector b
  – Compute the output y given x′ and the activation function
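For a ReLU block this means x′ = Ax + b and y = max(0, x′), componentwise. One standard way to turn the max into constraints, sketched here per unit under the assumption of known bounds l ≤ x′ ≤ u with l < 0 < u (the exact encoding used in the cited work may differ), is the big-M style MILP encoding: introduce a binary z ∈ {0,1} and require

  y ≥ x′,   y ≥ 0,   y ≤ x′ − l·(1 − z),   y ≤ u·z

With z = 1 these constraints force y = x′ ≥ 0; with z = 0 they force y = 0 and x′ ≤ 0, so together they capture the ReLU exactly.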