
Machine translation: p(strong winds) > p(large winds) - PowerPoint PPT Presentation



  1. [no text extracted from this slide]

  2. [no text extracted from this slide]

  3. ▪ Machine translation ▪ p(strong winds) > p(large winds) ▪ Spell correction ▪ The office is about fifteen minuets from my house ▪ p(about fifteen minutes from) > p(about fifteen minuets from) ▪ Speech recognition ▪ p(I saw a van) >> p(eyes awe of an) ▪ Summarization, question answering, handwriting recognition, OCR, etc.

  4. [diagram: noisy channel; source w → channel → observed a → best decoder ŵ] ▪ We want to predict a sentence given acoustics: ŵ = argmax_w P(w|a)

  5. [diagram: noisy channel; source w → channel → observed a → best decoder ŵ] ▪ The noisy-channel approach: argmax_w P(w|a) = argmax_w P(a|w) P(w), where P(a|w) is the likelihood (acoustic model, HMMs) and P(w) is the prior (language model: distributions over sequences of words, i.e. sentences)

  6. [diagram: source w ~ P(w) (language model) → channel P(a|w) (acoustic model) → observed a → best decoder: argmax_w P(w|a) = argmax_w P(a|w) P(w)] Candidate decodings: the station signs are in deep in english / the stations signs are in deep in english / the station signs are in deep into english / the station 's signs are in deep in english / the station signs are in deep in the english / the station signs are indeed in english / the station 's signs are indeed in english / the station signs are indians in english / the station signs are indian in english / the stations signs are indians in english / the stations signs are indians and english

  7. [diagram: sent transmission (English; source e ~ P(e), language model) → channel P(f|e) (translation model) → recovered transmission (French; observed f) → best decoder → recovered message (English’): argmax_e P(e|f) = argmax_e P(f|e) P(e)]

  8. Candidate sentences with log-probabilities:
     the station signs are in deep in english      -14732
     the stations signs are in deep in english     -14735
     the station signs are in deep into english    -14739
     the station 's signs are in deep in english   -14740
     the station signs are in deep in the english  -14741
     the station signs are indeed in english       -14757
     the station 's signs are indeed in english    -14760
     the station signs are indians in english      -14790
     the station signs are indian in english       -14799
     the stations signs are indians in english     -14807
     the stations signs are indians and english    -14815
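The ranking above is exactly what a noisy-channel decoder computes: score each candidate w by log P(a|w) + log P(w) and take the argmax. A minimal sketch in Java (the class name and the toy scores in the test are hypothetical; working in log space avoids floating-point underflow on long sentences):

```java
import java.util.*;

// Rank candidate sentences by total noisy-channel score
// log P(a|w) + log P(w); the best-scoring candidate comes first.
public class NoisyChannelRank {
    static List<String> rank(Map<String, Double> logLikelihood,
                             Map<String, Double> logPrior) {
        List<String> cands = new ArrayList<>(logPrior.keySet());
        // Negate the score so the highest total sorts to index 0.
        cands.sort(Comparator.comparingDouble(
                w -> -(logLikelihood.get(w) + logPrior.get(w))));
        return cands;
    }
}
```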

  9. ▪ A language model is a distribution over sequences of words (sentences) ▪ What’s w? (closed vs open vocabulary) ▪ What’s n? (must sum to one over all lengths) ▪ Can have rich structure or be linguistically naive ▪ Why language models? ▪ Usually the point is to assign high weights to plausible sentences (cf. acoustic confusions) ▪ This is not the same as modeling grammaticality

  10. ▪ Language models are distributions over sentences ▪ N-gram models are built from local conditional probabilities ▪ The methods we’ve seen are backed by corpus n-gram counts
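The summary above can be made concrete: a bigram model estimates the local conditional probability P(w2 | w1) as the ratio of corpus counts c(w1 w2) / c(w1). A minimal unsmoothed sketch (class name and toy corpus are hypothetical):

```java
import java.util.*;

// Bigram language model from corpus n-gram counts:
// P(w2 | w1) = c(w1 w2) / c(w1), the maximum-likelihood estimate.
public class BigramLM {
    final Map<String, Integer> unigram = new HashMap<>();
    final Map<String, Integer> bigram = new HashMap<>();

    void train(String[] tokens) {
        for (int i = 0; i < tokens.length; i++) {
            unigram.merge(tokens[i], 1, Integer::sum);
            if (i + 1 < tokens.length)
                bigram.merge(tokens[i] + " " + tokens[i + 1], 1, Integer::sum);
        }
    }

    // Unsmoothed MLE; returns 0 for unseen bigrams (smoothing comes later).
    double prob(String w1, String w2) {
        int c1 = unigram.getOrDefault(w1, 0);
        if (c1 == 0) return 0.0;
        return bigram.getOrDefault(w1 + " " + w2, 0) / (double) c1;
    }
}
```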

  11. [no text extracted from this slide]

  12. [no text extracted from this slide]

  13. [no text extracted from this slide]

  14. [no text extracted from this slide]

  15. [no text extracted from this slide]

  16. [no text extracted from this slide]

  17. [no text extracted from this slide]

  18. [slide mentioning unk (unknown words); most text not extracted]

  19. [no text extracted from this slide]

  20. [diagram: data split into Training Data | Held-Out Data | Test Data; counts/parameters from training data, hyperparameters from held-out data, evaluate on test data]

  21. ▪ We often want to make estimates from sparse statistics: P(w | denied the): allegations 3, reports 2, claims 1, request 1 (charges, benefits, motion, …: 0); 7 total ▪ Smoothing flattens spiky distributions so they generalize better: P(w | denied the): allegations 2.5, reports 1.5, claims 0.5, request 0.5, other 2; 7 total ▪ Very important all over NLP, but easy to do badly
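The flattening idea above can be sketched as add-k smoothing, one simple realization (the slide does not commit to a particular method; the class name, vocabulary size, and k below are illustrative): P(w | h) = (c(h, w) + k) / (c(h) + k·V), which gives every word in the vocabulary nonzero mass.

```java
import java.util.*;

// Add-k smoothing: reserve a little probability mass for unseen words
// by pretending every vocabulary word was seen k extra times.
public class AddKSmoothing {
    static double prob(Map<String, Integer> counts, String w,
                       int vocabSize, double k) {
        int total = counts.values().stream().mapToInt(Integer::intValue).sum();
        return (counts.getOrDefault(w, 0) + k) / (total + k * vocabSize);
    }
}
```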

  22. [no text extracted from this slide]

  23. [no text extracted from this slide]

  24. [no text extracted from this slide]

  25. [no text extracted from this slide]

  26. [no text extracted from this slide]

  27. [no text extracted from this slide]

  28. The LAMBADA dataset Context: “Why?” “I would have thought you’d find him rather dry,” she said. “I don’t know about that,” said Gabriel. “He was a great craftsman,” said Heather. “That he was,” said Flannery. Target sentence: “And Polish, to boot,” said _______. Target word: Gabriel [Paperno et al. 2016]

  29. Other Techniques? ▪ Lots of other techniques ▪ Maximum entropy LMs ▪ Neural network LMs (soon) ▪ Syntactic / grammar-structured LMs (later)

  30. How to Build an LM

  31. ▪ Good LMs need lots of n-grams! [Brants et al., 2007]

  32. ▪ Key function: map from n-grams to counts
      …
      searching for the best      192593
      searching for the right      45805
      searching for the cheapest   44965
      searching for the perfect    43959
      searching for the truth      23165
      searching for the “          19086
      searching for the most       15512
      searching for the latest     12670
      searching for the next       10120
      searching for the lowest     10080
      searching for the name        8402
      searching for the finest      8171
      …

  33. https://ai.googleblog.com/2006/08/all-our-n-gram-are-belong-to-you.html

  34. ● 24GB compressed ● 6 DVDs

  35. [diagram: source w ~ P(w) → channel P(a|w) → observed a → best decoder: argmax_w P(w|a) = argmax_w P(a|w) P(w)] Candidate decodings: the station signs are in deep in english / the stations signs are in deep in english / the station signs are in deep into english / the station 's signs are in deep in english / the station signs are in deep in the english / the station signs are indeed in english / the station 's signs are indeed in english / the station signs are indians in english / the station signs are indian in english / the stations signs are indians in english / the stations signs are indians and english

  36. [diagram: hash table with chaining; key/value slots 0–7: hash(cat)=2 → c(cat)=12, hash(the)=2 → c(the)=87 (collision, chained at slot 2), hash(and)=5 → c(and)=76, hash(dog)=7 → c(dog)=11; lookup c(have): hash(have)=2, search the chain at slot 2]

  37. HashMap<String, Long> ngram_counts = new HashMap<>();
      String ngram1 = "I have a car";
      String ngram2 = "I have a cat";
      ngram_counts.put(ngram1, 123L);
      ngram_counts.put(ngram2, 333L);

  38. HashMap<String[], Long> ngram_counts = new HashMap<>();
      String[] ngram1 = {"I", "have", "a", "car"};
      String[] ngram2 = {"I", "have", "a", "cat"};
      ngram_counts.put(ngram1, 123L);
      ngram_counts.put(ngram2, 333L);
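One caveat worth noting about this variant (an observation not on the slide): Java arrays inherit identity-based hashCode/equals from Object, so two String[] with identical contents are distinct HashMap keys and lookups silently miss. Wrapping the tokens in a List (or encoding them to a long, as the following slides do) restores value equality. A quick demonstration with hypothetical class and method names:

```java
import java.util.*;

// Shows why HashMap<String[], Long> is a trap: array keys compare by
// identity, while List keys compare by contents.
public class ArrayKeyPitfall {
    static Long lookupByArray() {
        HashMap<String[], Long> m = new HashMap<>();
        m.put(new String[]{"I", "have", "a", "cat"}, 333L);
        // A fresh array with equal contents is a *different* key.
        return m.get(new String[]{"I", "have", "a", "cat"});
    }

    static Long lookupByList() {
        HashMap<List<String>, Long> m = new HashMap<>();
        m.put(List.of("I", "have", "a", "cat"), 333L);
        // Lists use element-wise equals/hashCode, so this hits.
        return m.get(List.of("I", "have", "a", "cat"));
    }
}
```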

  39. HashMap<String[], Long> ngram_counts; Per 3-gram: 1 pointer = 8 bytes; 1 Map.Entry = 8 bytes (obj) + 3×8 bytes (pointers); 1 Long = 8 bytes (obj) + 8 bytes (long); 1 String[] = 8 bytes (obj) + 3×8 bytes (pointers) … and that is at best, assuming Strings are canonicalized. Total: > 88 bytes. 4 billion n-grams × 88 bytes = 352 GB. Obvious alternatives: sorted arrays, open addressing

  40. [diagram: empty hash table, key/value slots 0–7; entries to insert: c(cat)=12 with hash(cat)=2, c(the)=87 with hash(the)=2, c(and)=76 with hash(and)=5, c(dog)=11 with hash(dog)=7]

  41. [diagram: open addressing with linear probing; slot 2: cat → 12, slot 3: the → 87 (hash(the)=2, probed forward to 3), slot 5: and → 76, slot 7: dog → 11; lookup c(have): hash(have)=2, probe slots 2, 3, … until a match or an empty slot]

  42. [diagram: a larger hash table (slots 0–15) with the same entries: c(cat)=12 hash 2, c(the)=87 hash 2, c(and)=76 hash 5, c(dog)=11 hash 7]

  43. ▪ Closed address hashing ▪ Resolve collisions with chains ▪ Easier to understand but bigger ▪ Open address hashing ▪ Resolve collisions with probe sequences ▪ Smaller but easy to mess up ▪ Direct-address hashing ▪ No collision resolution ▪ Just eject previous entries ▪ Not suitable for core LM storage
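The open-addressing option above can be sketched with parallel primitive arrays, which is exactly what makes it compact: no Entry objects, just a long[] of keys and a long[] of values. A minimal linear-probing sketch (class name is hypothetical; it assumes nonzero keys and a table that never completely fills):

```java
// Open-address hash table over primitive longs: collisions are resolved
// by probing forward until an empty slot or a matching key is found.
public class OpenAddressTable {
    final long[] keys;   // 0 marks an empty slot, so keys must be nonzero
    final long[] vals;

    OpenAddressTable(int capacity) {
        keys = new long[capacity];
        vals = new long[capacity];
    }

    void put(long key, long val) {
        // Mask the sign bit so the modulus is nonnegative for any key.
        int i = (int) ((key & Long.MAX_VALUE) % keys.length);
        while (keys[i] != 0 && keys[i] != key)   // linear probe sequence
            i = (i + 1) % keys.length;
        keys[i] = key;
        vals[i] = val;
    }

    long get(long key) {   // returns -1 if the key is absent
        int i = (int) ((key & Long.MAX_VALUE) % keys.length);
        while (keys[i] != 0) {
            if (keys[i] == key) return vals[i];
            i = (i + 1) % keys.length;
        }
        return -1;
    }
}
```

This is the "smaller but easy to mess up" trade-off: deletion, resizing, and full-table termination all need care that the chained version gets for free.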

  44. HashMap<String[], Long> ngram_counts; Per 3-gram: 1 pointer = 8 bytes; 1 Map.Entry = 8 bytes (obj) + 3×8 bytes (pointers); 1 Long = 8 bytes (obj) + 8 bytes (long); 1 String[] = 8 bytes (obj) + 3×8 bytes (pointers) … and that is at best, assuming Strings are canonicalized. Total: > 88 bytes. Obvious alternatives: sorted arrays, open addressing

  45. [diagram: word ids 7, 1, 15 for the n-gram “the cat laughed”; n-gram count 233]

  46. Got 3 numbers under 2^20 to store? 7 = 0…00111, 1 = 0…00001, 15 = 0…01111; 20 bits each, so all three fit in a primitive 64-bit long
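The packing itself is a few shifts and masks. A sketch assuming the three ids are laid out high-to-low within the long (the actual field order on the slide is not specified; the class name is hypothetical):

```java
// Pack three word ids, each < 2^20, into one primitive 64-bit long:
// bits 40-59 hold id1, bits 20-39 hold id2, bits 0-19 hold id3.
public class TrigramPack {
    static long pack(int id1, int id2, int id3) {
        return ((long) id1 << 40) | ((long) id2 << 20) | id3;
    }

    static int[] unpack(long code) {
        int mask = (1 << 20) - 1;   // low 20 bits
        return new int[]{(int) (code >>> 40) & mask,
                         (int) (code >>> 20) & mask,
                         (int) code & mask};
    }
}
```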

  47. [n-gram encoding: 15176595 = “the cat laughed”, n-gram count 233; 32 bytes → 8 bytes]

  48. HashMap<String[], Long> ngram_counts; Per 3-gram: 1 pointer = 8 bytes; 1 Map.Entry = 8 bytes (obj) + 3×8 bytes (pointers); 1 Long = 8 bytes (obj) + 8 bytes (long); 1 String[] = 8 bytes (obj) + 3×8 bytes (pointers) … and that is at best, assuming Strings are canonicalized. Total: > 88 bytes. Obvious alternatives: sorted arrays, open addressing

  49. c(the) = 23135851162 < 2^35, so 35 bits suffice to represent integers between 0 and 2^35. [layout: n-gram encoding 15176595 in 60 bits | count 233 in 35 bits]

  50. ● 24GB compressed ● 6 DVDs

  51. # unique counts = 770000 < 2^20, so 20 bits suffice to represent the ranks of all counts. [rank table: rank 0 → count 1, rank 1 → count 2, rank 2 → count 51, rank 3 → count 233; layout: n-gram encoding 15176595 in 60 bits | rank 3 in 20 bits]
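The rank indirection can be sketched as a sorted array of the distinct count values plus a binary search: each n-gram stores only a small rank, and the rank decodes back to the full count (the class name is hypothetical; the count values mirror the rank table on the slide):

```java
import java.util.Arrays;

// Count ranking: store each n-gram's count as a rank into one shared
// sorted array of distinct count values, shrinking per-entry storage
// from 35 bits to 20 bits.
public class CountRank {
    final long[] distinctCounts;   // sorted ascending; index = rank

    CountRank(long[] sortedDistinct) {
        distinctCounts = sortedDistinct;
    }

    int rankOf(long count) {       // the small value stored per n-gram
        return Arrays.binarySearch(distinctCounts, count);
    }

    long countOf(int rank) {       // decode a stored rank back to a count
        return distinctCounts[rank];
    }
}
```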

  52. [diagram: Vocabulary → n-gram encoding scheme (unigram: f(id) = id; bigram: f(id1, id2) = ?; trigram: f(id1, id2, id3) = ?) → Count DB (unigram / bigram / trigram) → counts lookup]
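One simple way to fill in the question marks above (an illustrative scheme, not necessarily the intended answer; Pauls and Klein 2011 discuss more compact context-based encodings) is to combine ids positionally using the vocabulary size V, so each n-gram maps to a unique long:

```java
// Positional n-gram encoding with vocabulary size V:
// f(id1, id2) = id1 * V + id2, applied once more per extra word.
public class NgramEncoder {
    final long V;   // vocabulary size; all ids must satisfy 0 <= id < V

    NgramEncoder(long vocabSize) {
        V = vocabSize;
    }

    long unigram(long id1) { return id1; }
    long bigram(long id1, long id2) { return id1 * V + id2; }
    long trigram(long id1, long id2, long id3) { return (id1 * V + id2) * V + id3; }
}
```

The encoding is injective as long as the ids stay below V, so distinct n-grams never collide, and word order matters.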

  53. [no text extracted from this slide]

  54. [Many details from Pauls and Klein, 2011]

  55. Compression
