

  1. LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better
Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018)

  2. Motivation
Language exhibits hierarchical structure:
[[The cat [that he adopted]] [sleeps]]
... but LSTMs work so well without explicit notions of structure.

  3. Number Agreement
[Figure: a number agreement example with two attractors (Linzen et al., 2016)]
Number agreement is a cognitively motivated probe to distinguish hierarchical theories from purely sequential ones.
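
A minimal sketch of how the probe is usually scored with a language model: compare the probability the model assigns to the correctly inflected verb against the incorrectly inflected one, given the sentence prefix. Here `log_prob_next_word` is a hypothetical stand-in for whatever LM is being evaluated, and the example prefix is illustrative, not necessarily the one shown on the slide.

```python
def number_agreement_correct(log_prob_next_word, prefix, correct_verb, incorrect_verb):
    """Return True if the LM prefers the correctly inflected verb.

    log_prob_next_word(prefix, word) is a hypothetical helper returning
    log P(word | prefix) under the language model being probed.
    """
    return log_prob_next_word(prefix, correct_verb) > log_prob_next_word(prefix, incorrect_verb)

# Illustrative example with two attractors: "keys" is the (plural) agreement
# controller, while "cabinet" and "office" are singular attractors.
prefix = "the keys to the cabinet in the office".split()
# number_agreement_correct(lm_score, prefix, "are", "is")
```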

  4. Number Agreement is Sensitive to Syntactic Structure
Number agreement reflects the dependency relation between subjects and verbs.
Models that can capture headedness should do better at number agreement.

  5. Overview
● Revisit the prior work of Linzen et al. (2016), which argues that LSTMs trained with a language modelling objective fail to learn such dependencies.
● Investigate whether models that explicitly incorporate syntactic structure can do better, and how syntactic information should be encoded.
● Demonstrate that how the structure is built affects number agreement generalisation.

  6. Number Agreement Dataset Overview
             Train       Test
Sentences    141,948     1,211,080
Types        10,025      10,025
Tokens       3,159,622   26,512,851
The number agreement dataset is derived from a dependency-parsed Wikipedia corpus.
All intervening nouns must be of the same number.

  7. Number Agreement Dataset Overview
# Attractors   # Instances   % Instances
n=0            1,146,330     94.7%
n=1            52,599        4.3%
n=2            9,380         0.77%
n=3            2,051         0.17%
n=4            561           0.05%
n=5            159           0.01%
All intervening nouns must be of the same number.
The vast majority of number agreement dependencies are sequential.
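
To make the attractor counts concrete, here is a rough sketch of how intervening nouns of the opposite number ("attractors") could be counted between a subject and its verb. The real dataset is built from dependency-parsed Wikipedia, so this is only an illustration; `is_noun` and `number_of` are hypothetical annotation lookups.

```python
def count_attractors(tokens, subject_idx, verb_idx, is_noun, number_of):
    """Count intervening nouns whose number differs from the subject's.

    `is_noun` and `number_of` are hypothetical lookups over token-level
    annotations (e.g. from a POS tagger / morphological analyser).
    """
    subj_num = number_of(tokens[subject_idx])
    attractors = 0
    for i in range(subject_idx + 1, verb_idx):
        if is_noun(tokens[i]) and number_of(tokens[i]) != subj_num:
            attractors += 1
    return attractors
```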

  8. First Part: Can LSTMs Learn Number Agreement Well?
The model is trained with a language modelling objective.
Revisit the same question as Linzen et al. (2016): to what extent are LSTMs able to learn non-local syntax-sensitive dependencies in natural language?
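
As a reference point, a word-level LSTM language model of the kind probed here might look like the minimal sketch below. The dimensions are illustrative only (the H=50 and H=150 settings mentioned later refer to the LSTM hidden size); this is not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Word-level LSTM language model trained with the standard
    next-word prediction (cross-entropy) objective."""

    def __init__(self, vocab_size, emb_dim=50, hidden_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids):               # word_ids: (batch, seq_len)
        states, _ = self.lstm(self.embed(word_ids))
        return self.out(states)                # next-word logits at every position

# Language modelling objective: predict token t+1 from tokens <= t, e.g.
#   logits = model(batch[:, :-1])
#   loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
#                                      batch[:, 1:].reshape(-1))
```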

  9. Linzen et al. LSTM Number Agreement Error Rates
[Figure: number agreement error rates; lower is better]

  10. Small LSTM Number Agreement Error Rates
[Figure: number agreement error rates; lower is better]

  11. Larger LSTM Number Agreement Error Rates
[Figure: number agreement error rates; lower is better]
Capacity matters for capturing non-local structural dependencies.
Despite this, there is a relatively minor perplexity difference (~10%) between H=50 and H=150.

  12. LSTM Number Agreement Error Rates
[Figure: number agreement error rates; lower is better]
Capacity and the size of the training corpus are not the full story; domain and training settings matter too.

  13. Can Character LSTMs Learn Number Agreement Well?
Character LSTMs have been used in various tasks, including machine translation and language modelling.
+ It is easier to exploit morphological cues.
- The model has to resolve dependencies between sequences of tokens.
- The sequential dependencies are much longer.
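
The point about longer sequential dependencies can be made concrete with a quick comparison of the subject-verb distance in words versus characters; the sentence below is just an illustrative example.

```python
prefix = "the keys to the cabinet in the office"   # subject "keys" ... the verb comes next
words_between = len(prefix.split()) - 2             # word-level tokens after the subject -> 6
chars_between = len(prefix) - len("the keys")       # characters after the subject (incl. spaces) -> 29
print(words_between, chars_between)                 # the character-level dependency is several times longer
```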

  14. Character LSTM Agreement Error Rates
State-of-the-art character LSTM (Melis et al., 2018) model on the Hutter Prize data, with 27M parameters. Trained, validated, and tested on the same data.
[Figure: number agreement error rates; lower is better]
The strong character LSTM model performs much worse in the multiple-attractor cases.
This is consistent with earlier work (Sennrich, 2017), and a potential avenue for improvement.

  15. First Part Quick Recap
● LSTM language models are able to learn number agreement to a much larger extent than suggested by earlier work.
  ○ Independently confirmed by Gulordava et al. (2018).
  ○ We further identify model capacity as one of the reasons for the discrepancy.
  ○ Model tuning is important.
● A strong character LSTM language model performs much worse for number agreement with multiple attractors.

  16. Two Ways of Modelling Sentences

  17. Three Concrete Alternatives for Modeling Sentences
● Sequential LSTMs without syntax: model P(x) over the word sequence alone.
● Sequential LSTMs with syntax (Choe and Charniak, 2016): model P(x, y) over the linearised phrase-structure tree.
● RNNG (Dyer et al., 2016): model P(x, y) with a hierarchical inductive bias.
[Figure: the three models illustrated on the example sentence "the hungry cat meows"]
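
To make the second and third alternatives concrete, here is a sketch that turns a small phrase-structure tree into (i) the linearised bracket sequence modelled by the sequential syntactic LSTM (in the style of Choe and Charniak, 2016) and (ii) a corresponding sequence of top-down RNNG-style generation actions (NT, GEN, REDUCE). Both are simplified illustrations, not the exact preprocessing used in the paper.

```python
# A tiny phrase-structure tree for "the hungry cat meows",
# represented as nested (label, children) tuples; leaves are plain strings.
tree = ("S",
        [("NP", ["the", "hungry", "cat"]),
         ("VP", ["meows"])])

def linearise(node):
    """Flatten a tree into the bracketed token sequence that a
    sequential LSTM over syntax is trained on."""
    if isinstance(node, str):
        return [node]
    label, children = node
    out = ["(" + label]
    for child in children:
        out += linearise(child)
    return out + [")" + label]

def rnng_actions(node):
    """Turn the same tree into top-down RNNG-style generation actions."""
    if isinstance(node, str):
        return ["GEN(%s)" % node]
    label, children = node
    actions = ["NT(%s)" % label]
    for child in children:
        actions += rnng_actions(child)
    return actions + ["REDUCE"]

print(" ".join(linearise(tree)))
# (S (NP the hungry cat )NP (VP meows )VP )S
print(" ".join(rnng_actions(tree)))
# NT(S) NT(NP) GEN(the) GEN(hungry) GEN(cat) REDUCE NT(VP) GEN(meows) REDUCE REDUCE
```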

  18. Evidence of Headedness in the Composition Function
Kuncoro et al. (2017) found evidence of syntactic headedness in RNNGs (Dyer et al., 2016) by inspecting the composed representations through the attention weights.
The discovery of syntactic heads would be useful for number agreement.
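
A rough sketch of the idea: when an RNNG reduces a completed constituent, an attention-based composition function (in the spirit of Kuncoro et al., 2017) assigns weights to the child representations, and those weights can be read off to see whether the syntactic head (e.g. the noun in an NP) dominates. The code below is a simplified illustration, not the paper's exact composition function.

```python
import torch
import torch.nn as nn

class AttentionComposition(nn.Module):
    """Compose child representations of a constituent into one vector,
    exposing the attention weights for inspection (headedness probing)."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)    # query derived from the nonterminal embedding

    def forward(self, nonterminal, children):
        # nonterminal: (dim,), children: (n_children, dim)
        scores = children @ self.query(nonterminal)   # one score per child
        weights = torch.softmax(scores, dim=0)        # attention over the children
        composed = weights @ children                 # weighted sum -> constituent vector
        return composed, weights                      # weights reveal which child dominates
```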

  19. Experimental Settings
● All models are trained, validated, and tested on the same dataset.
● On the training split, the syntactic models are trained using predicted phrase-structure trees from the Stanford parser.
● At test time, we run the incremental beam search procedure (Stern et al., 2017) up to the main verb for both verb forms, and take the highest-scoring tree.
The most probable tree might be different for the correct and incorrect verb forms.
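
The test-time procedure can be summarised as the sketch below: for each candidate verb form, run incremental beam search up to and including the verb, keep the highest-scoring partial tree, and compare the resulting scores. `beam_search_prefix` is a hypothetical stand-in for the word-synchronous beam search of Stern et al. (2017).

```python
def agreement_decision(beam_search_prefix, prefix, correct_verb, incorrect_verb):
    """Return True if the syntactic model prefers the correct verb form.

    beam_search_prefix(tokens) is assumed to return the log joint score
    log p(x, y*) of the best partial tree y* found for the given prefix.
    """
    score_correct = beam_search_prefix(prefix + [correct_verb])
    score_incorrect = beam_search_prefix(prefix + [incorrect_verb])
    # Note: the best tree may differ between the two verb forms.
    return score_correct > score_incorrect
```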

  20. Experimental Findings
[Figure: number agreement error rates; lower is better]
50% error rate reductions for n=4 and n=5.
Performance differences are significant (p < 0.05).

  21. Perplexity
                      Dev ppl.
LSTM LM               72.6
Seq. Syntactic LSTM   79.2
RNNGs                 77.9
Perplexity for the syntactic models is obtained with importance sampling (Dyer et al., 2016).
The LSTM LM has the best perplexity despite worse number agreement performance.
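
For the syntactic models, p(x) has to marginalise over trees, so perplexity is estimated with importance sampling as in Dyer et al. (2016): sample trees from a proposal q(y|x) (e.g. a discriminative parser) and average the importance weights p(x, y)/q(y|x). A small, numerically stable sketch of that estimator:

```python
import math

def log_marginal_likelihood(log_joint, log_proposal, samples):
    """Importance-sampling estimate of log p(x) = log sum_y p(x, y).

    samples:      trees y_1..y_N drawn from the proposal q(y | x)
    log_joint:    y -> log p(x, y) under the syntactic model
    log_proposal: y -> log q(y | x) under the proposal distribution
    """
    log_weights = [log_joint(y) - log_proposal(y) for y in samples]
    m = max(log_weights)                                  # log-sum-exp for stability
    return m + math.log(sum(math.exp(w - m) for w in log_weights)) - math.log(len(samples))

# Corpus perplexity: exp(-(sum of log p(x) over sentences) / total token count).
```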

  22. Further Remarks: Confound in the Dataset
LSTM language models largely succeed in number agreement.
● In around 80% of cases with multiple attractors, the agreement controller coincides with the first noun.
Key question: how do LSTMs succeed in this task? By identifying the syntactic structure, or by memorising the first noun? (Kuncoro et al., L2HM 2018)
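
The confound can be illustrated with a trivial baseline that ignores syntax entirely: predict the verb's number from the first noun in the sentence. Because the agreement controller coincides with the first noun in roughly 80% of the multiple-attractor cases, such a heuristic already looks deceptively strong. `is_noun` and `number_of` are hypothetical annotation lookups, as before.

```python
def first_noun_baseline(tokens, is_noun, number_of):
    """Predict the verb's number from the first noun in the sentence.

    A purely sequential heuristic with no notion of the subject-verb
    dependency; it nevertheless matches the agreement controller whenever
    the controller happens to be the first noun.
    """
    for tok in tokens:
        if is_noun(tok):
            return number_of(tok)        # "singular" or "plural"
    return None
```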
