Sequence-to-sequence Models and Attention Graham Neubig
Preliminaries: Language Models
Language Models • Language models are generative models of text: x ~ P(X) • Example of text sampled from a language model: “The Malfoys!” said Hermione. Harry was watching him. He looked like Madame Maxime. When she strode up the wrong staircase to visit himself. “I’m afraid I’ve definitely been suspended from power, no chance — indeed?” said Snape. He put his head back behind them and read groups as they crossed a corner and fluttered down onto their ink lamp, and picked up his spoon. The doorbell rang. It was a lot cleaner down in London. Text Credit: Max Deutsch (https://medium.com/deep-writing/)
Calculating the Probability of a Sentence: $P(X) = \prod_{i=1}^{I} P(x_i \mid x_1, \dots, x_{i-1})$, i.e., the probability of each next word $x_i$ given its context $x_1, \dots, x_{i-1}$
Language Modeling w/ Neural Networks • At each time step, input the previous word and predict the probability of the next word (Figure: an RNN reads "<s> I hate this movie" one word at a time and predicts "I hate this movie </s>")
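A minimal sketch of such a recurrent language model, assuming PyTorch; the class and parameter names (`RNNLM`, `vocab_size`, `emb_dim`, `hidden_dim`) are illustrative, not from the slides:

```python
import torch
import torch.nn as nn

class RNNLM(nn.Module):
    """Minimal recurrent language model: embed the previous word, update
    the hidden state, and predict a distribution over the next word."""
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids):
        # word_ids: (batch, seq_len) indices of the previous words, starting with <s>
        emb = self.embed(word_ids)             # (batch, seq_len, emb_dim)
        hidden_states, _ = self.rnn(emb)       # (batch, seq_len, hidden_dim)
        return self.out(hidden_states)         # next-word logits at every step

# Toy usage: feed "<s> I hate this movie" (as indices) and read off
# log P(x_i | x_1, ..., x_{i-1}) for "I hate this movie </s>".
model = RNNLM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (1, 5)))
log_probs = torch.log_softmax(logits, dim=-1)
```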
Conditional Language Models
Conditioned Language Models • Not just generating text, but generating text according to some specification • Examples of Input X → Output Y (Text): Structured Data → NL Description (NL Generation); English → Japanese (Translation); Document → Short Description (Summarization); Utterance → Response (Response Generation); Image → Text (Image Captioning); Speech → Transcript (Speech Recognition)
Conditional Language Models: $P(Y \mid X) = \prod_{j=1}^{J} P(y_j \mid X, y_1, \dots, y_{j-1})$, where the conditioning on $X$ is the added context
(One Type of) Conditional Language Model (Sutskever et al. 2014) (Figure: an encoder LSTM reads "kono eiga ga kirai </s>"; its final state initializes a decoder LSTM, which generates "I hate this movie </s>" by taking the argmax word at each step and feeding it back in)
How to Pass Hidden State? • Initialize the decoder with the encoder's final state (Sutskever et al. 2014) • Transform the encoder state first (the two can have different dimensions) • Input the encoder state at every decoder time step (Kalchbrenner & Blunsom 2013)
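A minimal encoder-decoder sketch in PyTorch, implementing the "transform" option above so the encoder and decoder can have different hidden sizes; all names and sizes here are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Sketch of an encoder-decoder conditional language model: the encoder's
    final state is transformed and used to initialize the decoder."""
    def __init__(self, src_vocab, trg_vocab, emb=64, enc_dim=128, dec_dim=256):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, emb)
        self.trg_embed = nn.Embedding(trg_vocab, emb)
        self.encoder = nn.LSTM(emb, enc_dim, batch_first=True)
        self.decoder = nn.LSTM(emb, dec_dim, batch_first=True)
        self.bridge_h = nn.Linear(enc_dim, dec_dim)   # the "transform" step
        self.bridge_c = nn.Linear(enc_dim, dec_dim)
        self.out = nn.Linear(dec_dim, trg_vocab)

    def forward(self, src_ids, trg_input_ids):
        _, (h, c) = self.encoder(self.src_embed(src_ids))
        dec_init = (torch.tanh(self.bridge_h(h)), torch.tanh(self.bridge_c(c)))
        dec_states, _ = self.decoder(self.trg_embed(trg_input_ids), dec_init)
        return self.out(dec_states)   # logits for P(y_j | X, y_1, ..., y_{j-1})
```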
Methods of Generation
The Generation Problem • We have a model of P(Y|X); how do we use it to generate a sentence? • Two methods: • Sampling: Try to generate a random sentence according to the probability distribution. • Argmax: Try to generate the sentence with the highest probability.
Ancestral Sampling • Randomly generate words one by one: while y_{j-1} != “</s>”: y_j ~ P(y_j | X, y_1, …, y_{j-1}) • An exact method for sampling from P(Y|X); no further work needed.
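A sketch of ancestral sampling in Python, assuming a hypothetical `next_word_distribution(X, prefix)` function that returns the model's distribution P(y_j | X, y_1, ..., y_{j-1}) as a dict from word to probability:

```python
import random

def ancestral_sample(next_word_distribution, X, max_len=100):
    """Draw y_j ~ P(y_j | X, y_1, ..., y_{j-1}) until "</s>" is generated."""
    y = ["<s>"]
    while y[-1] != "</s>" and len(y) < max_len:
        dist = next_word_distribution(X, y)            # hypothetical model hook
        words, probs = zip(*dist.items())
        y.append(random.choices(words, weights=probs, k=1)[0])
    return y[1:]   # drop the <s> symbol
```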
Greedy Search • One by one, pick the single highest-probability word: while y_{j-1} != “</s>”: y_j = argmax P(y_j | X, y_1, …, y_{j-1}) • Not exact; real problems: it will often generate the “easy” words first, and will prefer multiple common words over one rare word
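The corresponding greedy decoder, under the same assumed `next_word_distribution` interface; only the selection rule changes:

```python
def greedy_search(next_word_distribution, X, max_len=100):
    """At every step, pick the single most probable next word (not exact:
    the globally highest-probability sentence can be missed)."""
    y = ["<s>"]
    while y[-1] != "</s>" and len(y) < max_len:
        dist = next_word_distribution(X, y)   # hypothetical model hook
        y.append(max(dist, key=dist.get))     # argmax over the next word
    return y[1:]
```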
Beam Search • Instead of picking one high-probability word, maintain several paths • Covered in the reading materials, with more in a later class; a rough sketch follows below
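A rough beam-search sketch under the same assumed interface, scoring hypotheses by summed log-probability; practical implementations add details such as length normalization that are deferred to the later class:

```python
import math

def beam_search(next_word_distribution, X, beam_size=5, max_len=100):
    """Keep the beam_size highest-scoring partial hypotheses at each step."""
    beams = [(["<s>"], 0.0)]                  # (prefix, summed log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix[-1] == "</s>":          # hypothesis is complete
                finished.append((prefix, score))
                continue
            for word, prob in next_word_distribution(X, prefix).items():
                candidates.append((prefix + [word], score + math.log(prob)))
        if not candidates:                    # every hypothesis has finished
            break
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    finished.extend(beams)
    return max(finished, key=lambda c: c[1])[0][1:]
```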
Attention
Sentence Representations • Problem! “You can’t cram the meaning of a whole %&!$ing sentence into a single $&!*ing vector!” — Ray Mooney • But what if we could use multiple vectors, based on the length of the sentence? (Figure: "this is an example" encoded as one vector per word instead of a single vector)
Basic Idea (Bahdanau et al. 2015) • Encode each word in the sentence into a vector • When decoding, perform a linear combination of these vectors, weighted by “attention weights” • Use this combination in picking the next word
Calculating Attention (1) • Use “query” vector (decoder state) and “key” vectors (all encoder states) • For each query-key pair, calculate a weight • Normalize the weights to add to one using softmax (Example from the figure: keys for "kono eiga ga kirai", raw scores a = (2.1, -0.1, 0.3, -1.0), softmax weights α = (0.76, 0.08, 0.13, 0.03))
Calculating Attention (2) • Combine the value vectors (usually the encoder states, like the key vectors) by taking the weighted sum (Figure: the value vectors for "kono eiga ga kirai" are scaled by α = (0.76, 0.08, 0.13, 0.03) and summed) • Use this combination in any part of the model you like
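A small NumPy sketch of both steps (score, softmax-normalize, weighted sum); the dot-product scorer and the toy numbers below are just for illustration and roughly reproduce the weights from the figure:

```python
import numpy as np

def attention(query, keys, values):
    """Score each key against the query, softmax-normalize the scores,
    and return the weighted sum of the value vectors."""
    scores = keys @ query                          # a_1, ..., a_n
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # alpha_1, ..., alpha_n
    return weights @ values, weights

# The raw scores shown in the figure give roughly the same softmax weights:
scores = np.array([2.1, -0.1, 0.3, -1.0])
alphas = np.exp(scores) / np.exp(scores).sum()
print(alphas.round(2))                             # ~ [0.76 0.08 0.13 0.03]
```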
A Graphical Example
Attention Score Functions (1) • q is the query and k is the key • Multi-layer Perceptron (Bahdanau et al. 2015): $a(q, k) = w_2^\top \tanh(W_1 [q; k])$; flexible, often very good with large data • Bilinear (Luong et al. 2015): $a(q, k) = q^\top W k$
Attention Score Functions (2) • Dot Product (Luong et al. 2015): $a(q, k) = q^\top k$; no parameters, but requires q and k to be the same size • Scaled Dot Product (Vaswani et al. 2017): the scale of the dot product grows as the dimensionality increases, so divide by the square root of the vector size: $a(q, k) = \frac{q^\top k}{\sqrt{|k|}}$
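The four scoring functions side by side as a NumPy sketch; the dimensionality and the randomly initialized parameter matrices are placeholder assumptions:

```python
import numpy as np

d = 4                                       # query/key dimensionality (illustrative)
q, k = np.random.randn(d), np.random.randn(d)
W1, w2 = np.random.randn(d, 2 * d), np.random.randn(d)        # MLP parameters
W = np.random.randn(d, d)                                      # bilinear parameter

mlp_score        = w2 @ np.tanh(W1 @ np.concatenate([q, k]))   # Bahdanau et al. 2015
bilinear_score   = q @ W @ k                                   # Luong et al. 2015
dot_score        = q @ k                                       # Luong et al. 2015
scaled_dot_score = q @ k / np.sqrt(d)                          # Vaswani et al. 2017
```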
What do we Attend To?
Input Sentence • Like the previous explanation • But also, more directly • Copying mechanism (Gu et al. 2016) • Lexicon bias (Arthur et al. 2016)
Previously Generated Things • In language modeling, attend to the previous words (Merity et al. 2016) • In translation, attend to either input or previous output (Vaswani et al. 2017)
Various Modalities • Images (Xu et al. 2015) • Speech (Chan et al. 2015)
Hierarchical Structures (Yang et al. 2016) • Encode each sentence with attention over its words, then encode the document with attention over its sentences
Multiple Sources • Attend to multiple sentences (Zoph et al. 2015) • Libovicky and Helcl (2017) compare multiple strategies • Attend to a sentence and an image (Huang et al. 2016)
Intra-Attention / Self Attention (Cheng et al. 2016) • Each element in the sentence attends to the other elements → context-sensitive encodings! (Figure: each word of "this is an example" attends to the other words of the same sentence)
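A NumPy sketch of self attention over a single sentence with dot-product scoring (a simplification of published intra-attention models, which add learned parameters):

```python
import numpy as np

def self_attention(H):
    """H: (sentence_length, d) word encodings. Every position scores every
    position, softmax-normalizes, and mixes the encodings, giving one
    context-sensitive vector per word."""
    scores = H @ H.T                                   # (n, n) pairwise scores
    scores -= scores.max(axis=-1, keepdims=True)       # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ H                                 # (n, d) contextual encodings

H = np.random.randn(4, 8)           # e.g. encodings of "this is an example"
print(self_attention(H).shape)      # (4, 8)
```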
How do we Evaluate?
Basic Evaluation Paradigm • Use parallel test set • Use system to generate translations • Compare target translations w/ reference
Human Evaluation • Ask a human to do evaluation • Final goal, but slow, expensive, and sometimes inconsistent
BLEU • Works by comparing n-gram overlap w/ reference • Pros: Easy to use, good for measuring system improvement • Cons: Often doesn’t match human eval, bad for comparing very different systems
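A rough single-reference sketch of the core computation (clipped n-gram precisions combined with a brevity penalty); standard BLEU implementations additionally handle multiple references, smoothing, and tokenization:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Geometric mean of clipped n-gram precisions (n=1..4) times a brevity
    penalty that punishes hypotheses shorter than the reference."""
    log_precision = 0.0
    for n in range(1, max_n + 1):
        hyp, ref = ngrams(hypothesis, n), ngrams(reference, n)
        overlap = sum(min(count, ref[gram]) for gram, count in hyp.items())
        total = max(sum(hyp.values()), 1)
        log_precision += math.log(max(overlap, 1e-9) / total) / max_n
    brevity = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return brevity * math.exp(log_precision)

print(bleu("the cat sat on the mat".split(), "the cat sat on a mat".split()))
```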
METEOR • Like BLEU in overall principle, with many other tricks: consider paraphrases, reordering, and function word/content word difference • Pros: Generally significantly better than BLEU, esp. for high-resource languages • Cons: Requires extra resources for new languages (although these can be made automatically), and more complicated
Perplexity • Calculate the perplexity of the words in the held-out set without doing generation • Pros: Naturally solves multiple-reference problem! • Cons: Doesn’t consider decoding or actually generating output. • May be reasonable for problems with lots of ambiguity.
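A tiny sketch: perplexity is just the exponentiated average negative log-probability that the model assigns to the held-out words, so no decoding is involved; the probabilities below are made up:

```python
import math

def perplexity(word_log_probs):
    """exp of the average negative log-probability of each held-out word."""
    return math.exp(-sum(word_log_probs) / len(word_log_probs))

# e.g. log P(y_j | X, y_1, ..., y_{j-1}) for three held-out words
print(perplexity([math.log(0.2), math.log(0.05), math.log(0.5)]))
```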
Questions?