  1. Distributed Representations CMSC 473/673 UMBC September 27th, 2017. Some slides adapted from 3SLP

  2. Course Announcement: Assignment 2 due Wednesday, October 18th by 11:59 AM. “Capstone”: Perform language ID with maxent models on code-switched data

  3. Course Announcement: Assignment 2 due Wednesday, October 18th by 11:59 AM. “Capstone”: Perform language ID with maxent models on code-switched data. Goals: 1. develop intuitions about maxent models & feature design; 4. get credit for successfully implementing the gradient; 5. perform classification with the models

  4. Recap from last time…

  5. Maxent Objective: Log-Likelihood. Log-probabilities span a wide range of (negative) numbers; sums are more numerically stable than products; and differentiating becomes nicer (even though Z depends on θ). The objective is implicitly defined with respect to (wrt) the data on hand.
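
For reference, with feature function f(x, y), weights θ, and normalizer Z_θ(x) = Σ_{y'} exp(θ · f(x, y')), the objective being maximized is the standard maxent log-likelihood: F(θ) = Σ_i log p_θ(y_i | x_i) = Σ_i [ θ · f(x_i, y_i) − log Z_θ(x_i) ].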

  6. Log-Likelihood Gradient. Each component k is the difference between: the total value of feature f_k in the training data, and the total value the current model p_θ expects for feature f_k.

  7. Log-Likelihood Derivative Derivation: ∂F/∂θ_k = Σ_i f_k(x_i, y_i) − Σ_i Σ_{y'} p_θ(y' | x_i) f_k(x_i, y')
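
A minimal numpy sketch of this gradient, assuming the common multiclass setup where f_k(x, y) pairs an input feature with a class label; the names maxent_gradient, X, y, and num_classes are illustrative, not the course's reference code:

import numpy as np

def maxent_gradient(theta, X, y):
    """Gradient of the maxent log-likelihood.

    theta: (num_classes, num_features) weight matrix
    X:     (num_examples, num_features) feature matrix
    y:     (num_examples,) gold class indices (integers)
    Returns the (num_classes, num_features) gradient:
    observed feature totals minus model-expected feature totals.
    """
    scores = X @ theta.T                         # (num_examples, num_classes)
    scores -= scores.max(axis=1, keepdims=True)  # stabilize the softmax
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)    # p_theta(y' | x_i)

    observed = np.zeros_like(theta)
    np.add.at(observed, y, X)                    # sum_i f(x_i, y_i), grouped by gold class
    expected = probs.T @ X                       # sum_i sum_y' p_theta(y'|x_i) f(x_i, y')
    return observed - expected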

  8. N-gram Language Models: given some context (w_{i-3}, w_{i-2}, w_{i-1}), compute beliefs about what is likely: p(w_i | w_{i-3}, w_{i-2}, w_{i-1}) ∝ count(w_{i-3}, w_{i-2}, w_{i-1}, w_i), and predict the next word w_i.
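
As an illustration, a count-based 4-gram predictor can be sketched in a few lines of Python (the toy corpus and function names below are illustrative):

from collections import Counter, defaultdict

def train_ngram_lm(tokens, order=4):
    """Count-based n-gram model: p(w_i | context) is proportional to count(context, w_i)."""
    counts = defaultdict(Counter)
    for i in range(order - 1, len(tokens)):
        context = tuple(tokens[i - order + 1:i])
        counts[context][tokens[i]] += 1
    return counts

def next_word_dist(counts, context):
    """Distribution over next words given an (order-1)-word context seen in training."""
    dist = counts.get(tuple(context))
    if not dist:
        return None
    total = sum(dist.values())
    return {w: c / total for w, c in dist.items()}

# toy usage (illustrative corpus)
tokens = "the paper reflected the truth and the paper reflected the facts".split()
counts = train_ngram_lm(tokens, order=4)
print(next_word_dist(counts, ["the", "paper", "reflected"]))  # {'the': 1.0}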

  9. Maxent Language Models: given some context (w_{i-3}, w_{i-2}, w_{i-1}), compute beliefs about what is likely: p(w_i | w_{i-3}, w_{i-2}, w_{i-1}) ∝ softmax(θ · f(w_{i-3}, w_{i-2}, w_{i-1}, w_i)), and predict the next word w_i.

  10. Neural Language Models: given some context (w_{i-3}, w_{i-2}, w_{i-1}), create/use “distributed representations” e_w for each word (e_{i-3}, e_{i-2}, e_{i-1}); combine these representations with a matrix-vector product, C · [e_{i-3}; e_{i-2}; e_{i-1}] = f; then compute beliefs about what is likely: p(w_i | w_{i-3}, w_{i-2}, w_{i-1}) ∝ softmax(θ_{w_i} · g(w_{i-3}, w_{i-2}, w_{i-1})), and predict the next word w_i.
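
A minimal numpy sketch of this forward pass in the style of a feed-forward neural LM; the sizes, the tanh nonlinearity, and the variable names are assumptions for illustration, not the slide's exact architecture:

import numpy as np

rng = np.random.default_rng(0)
V, d = 1000, 50                 # vocabulary size, embedding dimension

E = rng.normal(size=(V, d))     # word embeddings e_w (one row per word)
C = rng.normal(size=(d, 3 * d)) # combines the three context embeddings
theta = rng.normal(size=(V, d)) # output weights theta_{w_i}, one row per word

def next_word_probs(context_ids):
    """p(w_i | w_{i-3}, w_{i-2}, w_{i-1}) = softmax(theta_{w_i} . g(context))."""
    e = np.concatenate([E[j] for j in context_ids])  # [e_{i-3}; e_{i-2}; e_{i-1}]
    g = np.tanh(C @ e)                               # combined representation
    scores = theta @ g                               # one score per vocabulary word
    scores -= scores.max()                           # numerical stability
    p = np.exp(scores)
    return p / p.sum()

probs = next_word_probs([12, 7, 301])   # three arbitrary context word ids
print(probs.argmax(), probs.sum())      # predicted word id; probabilities sum to 1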


  12. Word Similarity → Plagiarism Detection

  13. Distributional models of meaning = vector-space models of meaning = vector semantics. Zellig Harris (1954): “oculist and eye-doctor … occur in almost the same environments”; “If A and B have almost identical environments we say that they are synonyms.” Firth (1957): “You shall know a word by the company it keeps!”

  14. Continuous Meaning: The paper reflected the truth.

  15. Continuous Meaning: The paper reflected the truth. [Figure: the words “paper”, “reflected”, and “truth” plotted as points in a continuous space]

  16. Continuous Meaning: The paper reflected the truth. [Figure: “paper”, “reflected”, and “truth” plotted alongside the related words “glean”, “hide”, and “falsehood”]

  17. (Some) Properties of Embeddings: capture “like” (similar) words. Mikolov et al. (2013)

  18. (Some) Properties of Embeddings: capture “like” (similar) words; capture relationships, e.g. vector(‘king’) – vector(‘man’) + vector(‘woman’) ≈ vector(‘queen’), and vector(‘Paris’) – vector(‘France’) + vector(‘Italy’) ≈ vector(‘Rome’). Mikolov et al. (2013)
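
This relationship arithmetic is easy to try with pretrained vectors; here is a small sketch using gensim (the 'glove-wiki-gigaword-50' download is one convenient choice assumed here; any pretrained KeyedVectors works):

import gensim.downloader as api

# Load small pretrained GloVe vectors (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-50")

# vector('king') - vector('man') + vector('woman') is closest to vector('queen')
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# vector('paris') - vector('france') + vector('italy') is closest to vector('rome')
print(vectors.most_similar(positive=["paris", "italy"], negative=["france"], topn=3))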

  19. Semantic Projection

  20. Semantic Projection

  21. Semantic Projection

  22. “You shall know a word by the company it keeps!” Firth (1957). Document (rows) × word (columns) count matrix:

                          battle  soldier  fool  clown
          As You Like It       1        2    37      6
          Twelfth Night        1        2    58    117
          Julius Caesar        8       12     1      0
          Henry V             15       36     5      0

  23. (Same document-word count matrix as above.) This is basic bag-of-words counting.

  24. (Same matrix.) Assumption: two documents are similar if their vectors (rows) are similar.

  25. (Same matrix.) Assumption: two words are similar if their vectors (columns) are similar.

  26. (Same matrix.) Assumption: two words are similar if their vectors are similar. Issue: count word vectors are very large, sparse, and skewed!
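
To make the similarity assumptions concrete, here is a small numpy sketch computing cosine similarities over the rows (documents) and columns (words) of the count matrix above:

import numpy as np

# Document-word count matrix from the slides (rows: plays, columns: words).
words = ["battle", "soldier", "fool", "clown"]
counts = np.array([
    [ 1,  2, 37,   6],   # As You Like It
    [ 1,  2, 58, 117],   # Twelfth Night
    [ 8, 12,  1,   0],   # Julius Caesar
    [15, 36,  5,   0],   # Henry V
], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Word vectors are columns: "fool" patterns with "clown" across plays,
# much more than it does with "battle".
fool, clown, battle = counts[:, 2], counts[:, 3], counts[:, 0]
print(cosine(fool, clown), cosine(fool, battle))

# Document vectors are rows: the comedies look more like each other
# than like the histories.
print(cosine(counts[0], counts[1]), cosine(counts[0], counts[3]))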

  27. “You shall know a word by the company it keeps!” Firth (1957). Context (rows) × word (columns) count matrix. Context: those other words within a small “window” of a target word.

                    apricot  pineapple  digital  information
          aardvark        0          0        0            0
          computer        0          0        2            1
          data            0         10        1            6
          pinch           1          1        0            0
          result          0          0        1            4
          sugar           1          1        0            0

  28. (Same context-word count matrix as above.) Context: those other words within a small “window” of a target word. Example window: “a cloud computer stores digital data on a remote computer”.

  29. (Same matrix.) The size of the window depends on your goals: the shorter the window (±1–3 words), the more syntactic (“syntax-y”) the representation; the longer the window (±4–10 words), the more semantic (“semantic-y”) the representation.

  30. (Same matrix.) Context: those other words within a small “window” of a target word. Assumption: two words are similar if their vectors are similar. Issue: count word vectors are very large, sparse, and skewed!
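
A windowed co-occurrence count is only a few lines of Python; the window size and toy sentence below are illustrative:

from collections import Counter, defaultdict

def cooccurrence_counts(tokens, window=2):
    """Count, for each target word, the other words within +/- `window` positions."""
    counts = defaultdict(Counter)
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[target][tokens[j]] += 1
    return counts

# toy usage with the example sentence from the slide
tokens = "a cloud computer stores digital data on a remote computer".split()
counts = cooccurrence_counts(tokens, window=2)
print(counts["digital"])   # Counter({'computer': 1, 'stores': 1, 'data': 1, 'on': 1})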

  31. Four kinds of vector models. Sparse vector representations: 1. mutual-information-weighted word co-occurrence matrices. Dense vector representations: 2. singular value decomposition / Latent Semantic Analysis; 3. neural-network-inspired models (skip-grams, CBOW); 4. Brown clusters.

  32. Shared Intuition: model the meaning of a word by “embedding” it in a vector space; the meaning of a word is a vector of numbers. Contrast: in many computational linguistics applications, word meaning is represented by a vocabulary index (“word number 545”) or the string itself.
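
A tiny illustration of the contrast (the vocabulary and vector sizes below are made up): an index or one-hot vector identifies a word but says nothing about similarity, while a dense embedding places it in a space where distances are meant to be meaningful:

import numpy as np

vocab = {"the": 0, "paper": 1, "reflected": 2, "truth": 3}   # illustrative vocabulary

# Symbolic representation: just an index (or the string itself).
word_id = vocab["truth"]            # "word number 3"

# One-hot vector: same information, still says nothing about similarity.
one_hot = np.zeros(len(vocab))
one_hot[word_id] = 1.0

# Distributed representation: a dense vector of numbers (random here, learned in practice).
embeddings = np.random.default_rng(0).normal(size=(len(vocab), 4))
print(word_id, one_hot, embeddings[word_id])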

  33. What’s the Meaning of Life?

  34. What’s the Meaning of Life? LIFE’

  35. What’s the Meaning of Life? LIFE’ = (.478, -.289, .897, …)

  36. “Embeddings” Did Not Begin In This Century. Hinton (1986): “Learning Distributed Representations of Concepts”; Deerwester et al. (1990): “Indexing by Latent Semantic Analysis”; Brown et al. (1992): “Class-based n-gram models of natural language”.

  37. Four kinds of vector models. Sparse vector representations: 1. mutual-information-weighted word co-occurrence matrices. Dense vector representations (you already saw some of this in assignment 1, question 3!): 2. singular value decomposition / Latent Semantic Analysis; 3. neural-network-inspired models (skip-grams, CBOW); 4. Brown clusters.

  38. Pointwise Mutual Information (PMI): Dealing with Problems of Raw Counts. Raw word frequency is not a great measure of association between words: it’s very skewed (“the” and “of” are very frequent, but maybe not the most discriminative). We’d rather have a measure that asks whether a context word is particularly informative about the target word: (Positive) Pointwise Mutual Information ((P)PMI).

  39. Pointwise Mutual Information (PMI): Dealing with Problems of Raw Counts. Raw word frequency is not a great measure of association between words: it’s very skewed (“the” and “of” are very frequent, but maybe not the most discriminative). We’d rather have a measure that asks whether a context word is particularly informative about the target word: (Positive) Pointwise Mutual Information ((P)PMI). Pointwise mutual information: do events x and y co-occur more than if they were independent?
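
The usual definition is PMI(x, y) = log2 [ P(x, y) / (P(x) P(y)) ], and PPMI clips negative values to zero. A minimal numpy sketch over a context-word count matrix like the one on slide 27:

import numpy as np

def ppmi(counts):
    """Positive PMI over a context-word count matrix:
    ppmi[i, j] = max(0, log2( P(i, j) / (P(i) P(j)) ))."""
    total = counts.sum()
    joint = counts / total                       # P(context, word)
    p_rows = joint.sum(axis=1, keepdims=True)    # P(context)
    p_cols = joint.sum(axis=0, keepdims=True)    # P(word)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log2(joint / (p_rows * p_cols))
    pmi[~np.isfinite(pmi)] = 0.0                 # zero counts contribute no association
    return np.maximum(pmi, 0.0)

# usage with the count matrix from slide 27 (rows: contexts, columns: words)
counts = np.array([
    [0,  0, 0, 0],   # aardvark
    [0,  0, 2, 1],   # computer
    [0, 10, 1, 6],   # data
    [1,  1, 0, 0],   # pinch
    [0,  0, 1, 4],   # result
    [1,  1, 0, 0],   # sugar
], dtype=float)
print(ppmi(counts).round(2))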
