Vector Semantics and Embeddings


  1. Vector Semantics and Embeddings CSE354 - Spring 2020 Natural Language Processing

  2. Tasks: vectors which represent words or sequences, and how? ● Dimensionality Reduction ● Recurrent Neural Network / Sequence Models

  3. Objective To embed: convert a token (or sequence) to a vector that represents meaning.

  4. Objective To embed: convert a token (or sequence) to a vector that represents meaning, or is useful for performing a downstream NLP application.

  5. Objective [diagram: the token “port” fed into an embedding function]

  6. Objective [diagram: “port” embedded as a one-hot vector: 0, ..., 0, 1, ..., 0]

  7. Objective Prefer dense vectors ● Fewer parameters (weights) for the machine learning model. ● May generalize better implicitly. ● May capture synonyms. For deep learning, in practice, they work better. Why? Roughly, having fewer parameters becomes increasingly important when you are learning multiple layers of weights rather than just a single layer. [diagram: the one-hot embedding of “port” (0, ..., 0, 1, ..., 0) is a sparse vector]

  8. Objective Prefer dense vectors ● Fewer parameters (weights) for the machine learning model. ● May generalize better implicitly. ● May capture synonyms. For deep learning, in practice, they work better. Why? Roughly, having fewer parameters becomes increasingly important when you are learning multiple layers of weights rather than just a single layer. [diagram: the one-hot embedding of “port” (0, ..., 0, 1, ..., 0) is a sparse vector] (Jurafsky, 2012)

  9. Objective Prefer dense vectors ● Fewer parameters (weights) for the machine learning model. ● May generalize better implicitly. ● May capture synonyms. For deep learning, in practice, they work better. Why? Roughly, having fewer parameters becomes increasingly important when you are learning multiple layers of weights rather than just a single layer. [diagram: the one-hot embedding of “port” (0, ..., 0, 1, ..., 0) is a sparse vector] (Jurafsky, 2012)
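To make the sparse-vs-dense contrast concrete, here is a minimal numpy sketch; the toy vocabulary, its indices, and the embedding size are invented for illustration, and the dense matrix is random rather than learned.

```python
import numpy as np

# Hypothetical toy vocabulary; indices and sizes are made up for illustration.
vocab = {"the": 0, "nail": 1, "hit": 2, "beam": 3, "port": 4}
V, d = len(vocab), 3          # sparse dimension vs. dense dimension

# One-hot (sparse): a length-V vector that is all zeros except a single 1.
one_hot = np.zeros(V)
one_hot[vocab["port"]] = 1.0

# Dense: a row lookup in a (normally learned) V x d embedding matrix.
# Random values here, just to show the mechanics of the lookup.
E = np.random.randn(V, d)
dense = E[vocab["port"]]      # shape (d,): a few real numbers instead of V mostly-zero ones

print(one_hot)                # [0. 0. 0. 0. 1.]
print(dense)
```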

  10. Objective To embed: convert a token (or sequence) to a vector that represents meaning.

  11. Objective To embed: convert a token (or sequence) to a vector that represents meaning. Wittgenstein, 1945: “The meaning of a word is its use in the language” Distributional hypothesis -- A word’s meaning is defined by all the different contexts it appears in (i.e. how it is “distributed” in natural language). Firth, 1957: “You shall know a word by the company it keeps”

  12. Objective To embed: convert a token (or sequence) to a vector that represents meaning. Wittgenstein, 1945: “The meaning of a word is its use in the language” Distributional hypothesis -- A word’s meaning is defined by all the different contexts it appears in (i.e. how it is “distributed” in natural language). Firth, 1957: “You shall know a word by the company it keeps” The nail hit the beam behind the wall.

  13. Distributional Hypothesis The nail hit the beam behind the wall.
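One common way to operationalize the distributional hypothesis is to count, for each target word, how often other words appear within a small window around it. A minimal sketch, assuming a one-sentence toy corpus and a window size of 2 (both chosen only for illustration):

```python
from collections import Counter

# Hypothetical one-sentence corpus and window size, chosen only for illustration.
corpus = ["the nail hit the beam behind the wall".split()]
window = 2

# Count how often each (target, context) pair appears within the window:
# these counts are the raw material the distributional hypothesis points to.
cooc = Counter()
for sent in corpus:
    for i, target in enumerate(sent):
        lo, hi = max(0, i - window), min(len(sent), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[(target, sent[j])] += 1

print(cooc[("beam", "hit")])   # co-occurrence count for (beam, hit): 1 in this toy corpus
```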

  14. Objective [diagram: “port” embedded as the dense vector 0.53, 1.5, 3.21, -2.3, .76]

  15. Objective [diagram: “port” embedded as the dense vector 0.53, 1.5, 3.21, -2.3, .76] Senses of “port”: port.n.1 (a place (seaport or airport) where people and merchandise can enter or leave a country); port.n.2, port wine (sweet dark-red dessert wine originally from Portugal); port.n.3, embrasure, porthole (an opening (in a wall or ship or armored vehicle) for firing through); larboard, port.n.4 (the left side of a ship or aircraft to someone who is aboard and facing the bow or nose); interface, port.n.5 ((computer science) computer circuit consisting of the hardware and associated circuitry that links one device with another (especially a computer and a hard disk drive or other peripherals))

  16. How? 1. One-hot representation 2. Selectors (represent context by “multi-hot” representation) 3. From PCA / Singular Value Decomposition (known as “Latent Semantic Analysis” in some circumstances): TF-IDF (Term Frequency - Inverse Document Frequency), PMI (Point-wise Mutual Information), etc.
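As a sketch of the PMI weighting mentioned in item 3, here is a positive-PMI (PPMI) computation over a small target-by-context co-occurrence matrix; the 3×3 counts are made up for illustration.

```python
import numpy as np

# Hypothetical 3x3 co-occurrence count matrix (targets x context words);
# the numbers are made up for illustration.
counts = np.array([[10., 0., 2.],
                   [ 3., 5., 0.],
                   [ 1., 4., 6.]])

total = counts.sum()
p_xy = counts / total                      # joint probability estimates
p_x = p_xy.sum(axis=1, keepdims=True)      # target (row) marginals
p_y = p_xy.sum(axis=0, keepdims=True)      # context (column) marginals

# Point-wise mutual information: log2 of p(x,y) / (p(x) p(y)); positive PMI
# clamps negative values (and the -inf from zero counts) to 0.
with np.errstate(divide="ignore"):
    pmi = np.log2(p_xy / (p_x * p_y))
ppmi = np.maximum(pmi, 0.0)
print(ppmi)
```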

  17. How? 1. One-hot representation 2. Selectors (represent context by “multi-hot” representation) 3. From PCA / Singular Value Decomposition (known as “Latent Semantic Analysis” in some circumstances) “Neural Embeddings”: 4. Word2vec 5. fastText 6. GloVe 7. BERT

  18. How? 1. One-hot representation 2. Selectors (represent context by “multi-hot” representation) 3. From PCA / Singular Value Decomposition (known as “Latent Semantic Analysis” in some circumstances) “Neural Embeddings”: 4. Word2vec 5. fastText 6. GloVe 7. BERT [diagram: the context “..., word1, word2, bill, word3, word4, ...” encoded as a sparse 0/1 vector over the vocabulary]
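A minimal sketch of the “selector” (multi-hot) idea in item 2 and the diagram above: the context around a target word is encoded as a 0/1 vector with a 1 for every vocabulary word that appears. The tiny vocabulary mirrors the schematic word1 ... word4 and is purely illustrative.

```python
import numpy as np

# Hypothetical vocabulary; the context of the target word "bill" is the set
# of words around it, encoded as a multi-hot vector.
vocab = {"word1": 0, "word2": 1, "bill": 2, "word3": 3, "word4": 4}
context = ["word1", "word2", "word3", "word4"]

multi_hot = np.zeros(len(vocab))
for w in context:
    multi_hot[vocab[w]] = 1.0   # 1 for every word that appears in the context

print(multi_hot)                # [1. 1. 0. 1. 1.]
```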

  19. How? 1. One-hot representation 2. Selectors (represent context by “multi-hot” representation) 3. From PCA / Singular Value Decomposition (known as “Latent Semantic Analysis” in some circumstances) “Neural Embeddings”: 4. Word2vec 5. fastText 6. GloVe 7. BERT

  20. How? 1. One-hot representation 2. Selectors (represent context by “multi-hot” representation) 3. From PCA / Singular Value Decomposition (known as “Latent Semantic Analysis” in some circumstances) “Neural Embeddings”: 4. Word2vec 5. fastText 6. GloVe 7. BERT

  21. How? 1. One-hot representation 2. Selectors (represent context by “multi-hot” representation) 3. From PCA / Singular Value Decomposition (known as “Latent Semantic Analysis” in some circumstances) “Neural Embeddings”: 4. Word2vec 5. fastText 6. GloVe 7. BERT

  22. SVD-Based Embeddings Singular Value Decomposition...

  23. Concept, in Matrix Form: rows are n observations (o1, o2, o3, ..., on); columns are p features (f1, f2, f3, f4, ..., fp).

  24. SVD-Based Embeddings [the same n × p matrix: rows are observations o1, o2, o3, ..., on; columns are features f1, f2, f3, f4, ..., fp]

  25. SVD-Based Embeddings: dimensionality reduction -- try to represent the data with only p’ dimensions, going from the n × p matrix with columns f1, f2, f3, f4, ..., fp to an n × p’ matrix with columns c1, c2, c3, c4, ..., cp’.

  26. Concept: Dimensionality Reduction in 3-D, 2-D, and 1-D [figure: p = 2 data reduced to p’ = 1] Data (or, at least, what we want from the data) may be accurately represented with fewer dimensions.

  27. Concept: Dimensionality Reduction in 3-D, 2-D, and 1-D [figures: p = 2 data reduced to p’ = 1; p = 3 data reduced to p’ = 2] Data (or, at least, what we want from the data) may be accurately represented with fewer dimensions.

  28. Concept: Dimensionality Reduction. Rank: the number of linearly independent columns of A (i.e. columns that can’t be derived from the other columns through addition). Q: What is the rank of this matrix?
      [ 1  -2   3 ]
      [ 2  -3   5 ]
      [ 1   1   0 ]

  29. Concept: Dimensionality Reduction. Rank: the number of linearly independent columns of A (i.e. columns that can’t be derived from the other columns through addition). Q: What is the rank of this matrix?
      [ 1  -2   3 ]
      [ 2  -3   5 ]
      [ 1   1   0 ]
      A: 2. The 1st column is just the sum of the other two, so we can represent the matrix as linear combinations of 2 vectors:
      [ 1  -2 ]
      [ 2  -3 ]
      [ 1   1 ]
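A quick numpy check of the rank example above, confirming that the rank is 2 and that the first column is the sum of the other two:

```python
import numpy as np

# The 3x3 matrix from the slide.
A = np.array([[1., -2., 3.],
              [2., -3., 5.],
              [1.,  1., 0.]])

print(np.linalg.matrix_rank(A))                   # 2
# The first column is the sum of the other two columns:
print(np.allclose(A[:, 0], A[:, 1] + A[:, 2]))    # True
```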

  30. SVD-Based Embeddings: dimensionality reduction -- try to represent the data with only p’ dimensions. Target words are the observations (rows o1, o2, o3, ..., on), context words are the features (columns f1, f2, f3, f4, ..., fp), and co-occurrence counts are the cells; reduce to columns c1, c2, c3, c4, ..., cp’.

  31. Dimensionality Reduction - PCA: linear approximation of the data in r dimensions, found via Singular Value Decomposition: X [n×p] = U [n×r] D [r×r] (V [p×r])^T. X: original matrix, U: “left singular vectors”, D: “singular values” (diagonal), V: “right singular vectors”.

  32. Dimensionality Reduction - PCA: linear approximation of the data in r dimensions, found via Singular Value Decomposition: X [n×p] = U [n×r] D [r×r] (V [p×r])^T. X: original matrix, U: “left singular vectors”, D: “singular values” (diagonal), V: “right singular vectors”. [diagram: the n × p matrix X ≈ the product of the three factors]
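A minimal sketch of computing the decomposition with numpy; the matrix here is random rather than real co-occurrence counts, and serves only to show the shapes of the factors U, D, and V^T.

```python
import numpy as np

# Hypothetical n x p matrix (random numbers standing in for co-occurrence counts).
n, p = 6, 4
X = np.random.rand(n, p)

# Thin SVD: X = U @ diag(d) @ Vt, with r = min(n, p) singular values.
U, d, Vt = np.linalg.svd(X, full_matrices=False)
print(U.shape, d.shape, Vt.shape)             # (6, 4) (4,) (4, 4)

# Sanity check that the factorization reproduces X.
print(np.allclose(X, U @ np.diag(d) @ Vt))    # True
```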

  33. Dimensionality Reduction - PCA - Example: X [n×p] = U [n×r] D [r×r] (V [p×r])^T, applied to word co-occurrence counts. [figure: word co-occurrence counts]

  34. Dimensionality Reduction - PCA - Example: X [n×p] ≅ U [n×r] D [r×r] (V [p×r])^T. [scatter plot: horizontal axis is a target word’s co-occurrence count with “hit”; vertical axis is its co-occurrence count with “nail”] Observation “beam”: count(beam, hit) = 100 (horizontal dimension), count(beam, nail) = 80 (vertical dimension).
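A tiny sketch in the spirit of this example: each row is a target word, the two columns are its co-occurrence counts with “hit” and “nail”, and the first singular direction gives a 1-D embedding. Only beam’s counts (100, 80) come from the slide; the other rows are invented.

```python
import numpy as np

# Rows: target words; columns: co-occurrence counts with "hit" and "nail".
# Only beam's counts (100, 80) are from the slide; the rest are placeholders.
X = np.array([[100., 80.],    # beam
              [ 90., 70.],
              [ 10.,  5.]])

U, d, Vt = np.linalg.svd(X, full_matrices=False)

# 1-D embedding: each word's coordinate along the first singular direction.
embedding_1d = U[:, 0] * d[0]
print(embedding_1d)
```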

  35. Dimensionality Reduction - PCA: linear approximation of the data in r dimensions, found via Singular Value Decomposition: X [n×p] ≅ U [n×r] D [r×r] (V [p×r])^T. X: original matrix, U: “left singular vectors”, D: “singular values” (diagonal), V: “right singular vectors”. Projection (the dimensionality-reduced space) in 3 dimensions: U [n×3] D [3×3] (V [p×3])^T. To reduce features in a new dataset A: A [m×p] V D = A_small [m×3].

  36. Dimensionality Reduction - PCA: linear approximation of the data in r dimensions, found via Singular Value Decomposition: X [n×p] ≅ U [n×r] D [r×r] (V [p×r])^T. X: original matrix, U: “left singular vectors”, D: “singular values” (diagonal), V: “right singular vectors”. To check how well the original matrix can be reproduced: Z [n×p] = U D V^T; how does Z compare to the original X? To reduce features in a new dataset A: A [m×p] V D = A_small [m×3].
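A sketch of both operations on this slide: rebuild Z from truncated factors and compare it to X, and reduce a new m × p dataset A via A V D as written above. The sizes and data are random placeholders, with k = 3 standing in for the 3 retained dimensions.

```python
import numpy as np

# Random placeholder data; n, p, and k are chosen only for illustration.
n, p, k = 8, 5, 3
X = np.random.rand(n, p)

U, d, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the top-k singular vectors/values.
U_k = U[:, :k]            # n x k
D_k = np.diag(d[:k])      # k x k
V_k = Vt[:k, :].T         # p x k

# Reconstruction: Z = U_k D_k V_k^T. How does Z compare to the original X?
Z = U_k @ D_k @ V_k.T
print(np.linalg.norm(X - Z))   # error of the rank-k approximation

# Reduce features of a new m x p dataset A, as on the slide: A V D.
A = np.random.rand(2, p)
A_small = A @ V_k @ D_k        # shape (2, k)
print(A_small.shape)
```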
