What Gets Echoed? Understanding the “Pointers” in Explanations of Persuasive Arguments
David Atkinson, Kumar Bhargav Srinivasan, and Chenhao Tan


  1. What Gets Echoed? Understanding the “Pointers” in Explanations of Persuasive Arguments David Atkinson, Kumar Bhargav Srinivasan, and Chenhao Tan {david.i.atkinson, kumar.srinivasan, chenhao.tan}@colorado.edu

  2. Explanations are important (Keil, 2006; Ribeiro et al., 2016; Lipton, 2016; Guidotti et al., 2019; Miller, 2019; Doshi-Velez and Kim, 2019; ...and so on).

  3. What is this?

  4. What is this? (Wagner et al., 2019)

  5. What is this? Explanandum / Explanation (Wagner et al., 2019)

  6. What about natural language explanations?

  7. Virginia Heffernan, writing in Wired: “In a culture of brittle talking points that we guard with our lives, Change My View is a source of motion and surprise.”

  8-11. r/ChangeMyView

  12. Pointers are common.

  13. How do explanations selectively incorporate pointers from their explananda?

  14. One answer: a plot of the probability of echoing vs. word frequency.
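As a rough illustration of the quantity behind this plot, here is a minimal sketch (not the authors' code; `pairs`, the whitespace tokenizer, and the log-frequency bucketing are assumptions for illustration) that computes the fraction of explanandum words echoed in the explanation, grouped by how frequent the word is in the corpus.

```python
import math
from collections import Counter, defaultdict

def words(text):
    """Lowercase and split on whitespace (a deliberately simple tokenizer)."""
    return text.lower().split()

def echo_prob_by_log_frequency(pairs):
    """pairs: list of (explanandum_text, explanation_text) string tuples.
    Returns {log10-frequency bucket: fraction of explanandum words echoed}."""
    # Document frequency of each word across explananda.
    corpus_freq = Counter()
    for explanandum, _ in pairs:
        corpus_freq.update(set(words(explanandum)))

    echoed, total = defaultdict(int), defaultdict(int)
    for explanandum, explanation in pairs:
        explanation_words = set(words(explanation))
        for w in set(words(explanandum)):
            bucket = int(math.log10(corpus_freq[w]))  # coarse frequency bucket
            total[bucket] += 1
            echoed[bucket] += int(w in explanation_words)

    return {b: echoed[b] / total[b] for b in sorted(total)}
```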

  15. A prediction task!

  16. The task: 1. Take the set of unique stems in the explanandum. 2. For every such stem s, assign the label 1 if s is in the set of unique stems in the explanation, and 0 otherwise.
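A minimal sketch of this labeling step, assuming whitespace tokenization and NLTK's Porter stemmer (the paper's exact preprocessing may differ):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def unique_stems(text):
    """Lowercase, whitespace-tokenize, and stem; return the set of unique stems."""
    return {stemmer.stem(tok) for tok in text.lower().split()}

def label_stems(explanandum, explanation):
    """Return {stem: 1 if the stem reappears in the explanation, else 0}
    for every unique stem in the explanandum."""
    explanation_stems = unique_stems(explanation)
    return {s: int(s in explanation_stems) for s in unique_stems(explanandum)}
```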

  17. What could affect pointer use? Feature categories: 1. non-contextual properties; 2. OP and PC usage; 3. how the word connects the OP and PC; 4. general properties of the OP or PC. Examples of non-contextual properties: IDF (↓), word length (↓).

  18. What could affect pointer use? Examples of OP and PC usage: POS tags (verb in OP: ↓, verb in PC: ↑, noun in OP: ↓, noun in PC: ↓), term frequency (↑), # of surface forms (↑), in a quotation (↑).

  19. What could affect pointer use? Examples of how the word connects the OP and PC: word is in both OP and PC (↑), # of the word's surface forms in OP but not in PC (↓) and vice versa (↑), JS divergence between the OP and PC POS distributions for the word (↓).

  20. What could affect pointer use? Examples of general properties of the OP or PC: OP length (↓), PC length (↑), depth of PC in the thread (↑), difference between the avg. word lengths in OP and PC (↓).
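For illustration, a hedged sketch of how a few of the features listed above could be computed for a single stem. The function name, the `idf` table, and the exact definitions are assumptions for this sketch, not the released feature code.

```python
from collections import Counter

def example_features(s, op_stems, pc_stems, idf):
    """s: a candidate stem; op_stems / pc_stems: lists of stems for the OP and
    the persuasive comment (PC); idf: dict mapping stem -> inverse document frequency."""
    op_counts, pc_counts = Counter(op_stems), Counter(pc_stems)
    return {
        # 1. Non-contextual properties
        "idf": idf.get(s, 0.0),
        "stem_length": len(s),
        # 2. Usage in the OP and PC
        "tf_op": op_counts[s],
        "tf_pc": pc_counts[s],
        # 3. How the word connects the OP and PC
        "in_both": int(s in op_counts and s in pc_counts),
        # 4. General properties of the OP and PC
        "op_length": len(op_stems),
        "pc_length": len(pc_stems),
        "avg_word_len_diff": (sum(map(len, op_stems)) / max(len(op_stems), 1)
                              - sum(map(len, pc_stems)) / max(len(pc_stems), 1)),
    }
```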

  21-22. Our features improve on LSTMs.

  23-24. Some parts of speech are more reliably predicted (Reynolds and Flagg, 1976).

  25-26. Which features matter?

  27-28. Our features can improve the generation of explanations: a pointer-generator network with coverage (See et al., 2017; Klein et al., 2017) + our features.
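One plausible way to wire such per-token features into a sequence-to-sequence model (an assumption for illustration, not the paper's released implementation) is to concatenate each token's feature vector with its embedding before the encoder. A full pointer-generator with coverage (See et al., 2017) would add copy attention and a coverage loss on top of an encoder like this.

```python
import torch
import torch.nn as nn

class FeatureAugmentedEncoder(nn.Module):
    """BiLSTM encoder whose inputs are [token embedding ; per-token feature vector]."""

    def __init__(self, vocab_size, emb_dim=128, feat_dim=16, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + feat_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, token_ids, token_features):
        # token_ids: (batch, seq_len); token_features: (batch, seq_len, feat_dim)
        x = torch.cat([self.embed(token_ids), token_features], dim=-1)
        outputs, state = self.lstm(x)
        return outputs, state

# Example usage with random inputs:
enc = FeatureAugmentedEncoder(vocab_size=10000)
ids = torch.randint(0, 10000, (2, 20))
feats = torch.randn(2, 20, 16)
outputs, _ = enc(ids, feats)
```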

  29-30. ...and increase copying.

  31-37. Takeaways
  Our Dataset: 1. We assemble a novel, large-scale dataset of naturally occurring explanations.
  Our Findings: 2. Pointers are common. 3. Importance of nouns. 4. Non-contextual properties matter for stopwords, contextual properties for content words.
  Our Features: 5. Improve on the prediction performance of vanilla LSTMs. 6. Improve the quality of generated explanations.
  Thank you! Code + data: github.com/davatk/what-gets-echoed
