
Introduction to Statistical Speech Recognition, Lecture 1, CS 753 - PowerPoint PPT Presentation



  1. Introduction to Statistical Speech Recognition Lecture 1 CS 753 Instructor: Preethi Jyothi

  2. Course Plan (I) • Cascaded ASR System: speech waveform → acoustic analysis → acoustic features → decoder, combining an Acoustic Model (AM), a Pronunciation Model (PM) and a Language Model (LM) • Weighted Finite State Transducers for ASR • AM: HMMs, DNN- and RNN-based models • PM: Phoneme- and grapheme-based models • LM: Ngram models (+smoothing), RNNLMs • Decoding Algorithms, Lattices 
 [Figure: cascaded ASR pipeline with a sample pronunciation lexicon (good: g uh d, like: l ay k, is: ih z) and Ngram scores, decoding “good prose is like a windowpane”]

  3. Course Plan (II) • End-to-end Neural Models for ASR - CTC loss function - Encoder-decoder architectures with attention • Speaker Adaptation • Speech Synthesis • Recent Generative Models (GANs, VAEs) for Speech Processing 
 Check www.cse.iitb.ac.in/~pjyothi/cs753 for latest updates 
 Moodle will be used for assignment/project-related submissions and all announcements 
 [Figure: Listen, Attend and Spell architecture with Listener encoder states h = (h_1, …, h_U) over inputs x_1, …, x_T and an attention-based Speller decoder. Image from: Chan et al., Listen, Attend and Spell: A NN for LVCSR, ICASSP 2016]

  4. Other Course Info • Teaching Assistants (TAs): - Vinit Unni (vinit AT cse) - Saiteja Nalla (saitejan AT cse) - Naman Jain (namanjain AT cse) • TA office hours: Wednesdays, 10 am to 12 pm (tentative) 
 Instructor 1-1: Email me to schedule a time • Readings: - No fixed textbook. “Speech and Language Processing” by Jurafsky and Martin serves as a good starting point. - All further readings will be posted online. • Audit requirements: Complete all assignments/quizzes and score ≥ 40%

  5. Course Evaluation • 3 Assignments OR 2 Assignments + 1 Quiz: 35% - At least one programming assignment: set up an ASR system based on a recipe & improve said recipe • Midsem Exam + Final Exam: 15% + 25% • Final Project: 20% • Participation: 5% • Attendance Policy? Strongly advised to attend lectures. 
 Also, participation points hinge on it.

  6. Academic Integrity Policy 
 Assignments/Exams • Always cite your sources (be it images, papers or existing code repos). 
 Follow proper citation guidelines. • Unless specifically permitted, collaborations are not allowed. • Do not copy or plagiarise. Will incur significant penalties.


  8. Final Project • Projects can be on any topic related to speech/audio processing. 
 Check website for abstracts from a previous offering. • No individual projects and no more than 3 members in a team. • Preliminary Project Evaluation (SEP 1-7): Short report detailing project statement, goals, specific tasks and preliminary experiments • Final Evaluation (NOV 7-14): - Presentation (oral or poster session, depending on final class strength) - Report (use ML conference style files & provide details about the project) • Excellent Projects: - Will earn extra credit that counts towards the final grade - Can be turned into a research paper

  9. #1: Speech-driven Facial Animation https://arxiv.org/pdf/1906.06337.pdf, June 2019 Videos from: https://sites.google.com/view/facial-animation

  10. #2: Speech2Gesture https://arxiv.org/abs/1906.04160, CVPR 2019 Image from: http://people.eecs.berkeley.edu/~shiry/projects/speech2gesture/

  11. #3: Decoding Brain Signals Into Speech https://www.nature.com/articles/s41586-019-1119-1, April 2019

  12. Introduction to ASR

  13. Automatic Speech Recognition • Problem statement: Transform a spoken utterance into a sequence of tokens (words, syllables, phonemes, characters) • Many downstream applications of ASR. Examples: - Speech understanding - Spoken translation - Audio information retrieval • Speech exhibits variability at multiple levels: speaker style, accents, room acoustics, microphone properties, etc.

  14. History of ASR RADIO REX (1922)

  15. History of ASR SHOEBOX (IBM, 1962) [Timeline: 1922, frequency detector (1 word)]

  16. History of ASR HARPY (CMU, 1976) [Timeline: 1922, frequency detector (1 word); 1962, isolated word recognition (16 words)]

  17. History of ASR HIDDEN MARKOV MODELS (1980s) [Timeline: 1922, frequency detector (1 word); 1962, isolated word recognition (16 words); 1976, connected speech (1000 words)]

  18. History of ASR Cortana, Siri: DEEP NEURAL NETWORK BASED SYSTEMS (>2010) [Timeline: 1922, frequency detector (1 word); 1962, isolated word recognition (16 words); 1976, connected speech (1000 words); 1980s, LVCSR systems (10K+ words)]

  19. How are ASR systems evaluated? • Error rates computed on an unseen test set by comparing W* (decoded sentence) against W_ref (reference sentence) for each test utterance - Sentence/Utterance error rate (trivial to compute!) - Word/Phone error rate • Word/Phone error rate (ER) uses the Levenshtein distance measure: What is the minimum number of edits (insertions/deletions/substitutions) required to convert W* to W_ref? On a test set with N instances: 
 ER = ( Σ_{j=1}^{N} (Ins_j + Del_j + Sub_j) ) / ( Σ_{j=1}^{N} ℓ_j ) 
 where Ins_j, Del_j, Sub_j are the numbers of insertions/deletions/substitutions in the j-th ASR output, and ℓ_j is the total number of words/phones in the j-th reference
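The Levenshtein computation behind this error rate can be sketched with standard dynamic programming (an illustrative addition, not from the slides; the function name and the word-level tokenisation are my own choices):

```python
def word_error_rate(ref, hyp):
    """Minimum edits (ins/del/sub) to turn hyp into ref, divided by
    reference length, via the classic Levenshtein DP table."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = min edits between first i ref words and first j hyp words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i                    # delete all remaining ref words
    for j in range(len(h) + 1):
        dp[0][j] = j                    # insert all remaining hyp words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])  # match or sub
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / len(r)

print(word_error_rate("good prose is like a windowpane",
                      "good rose is like windowpane"))  # 2 edits / 6 words ≈ 0.333
```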

  20. Remarkable progress in ASR in the last decade 
 [Figure: NIST benchmark history of WER (log scale, 100% down to 1%, through 2018) across tasks: Switchboard, Switchboard II, Switchboard Cellular, Conversational/Meeting Speech (IHM, SDM, MDM), Broadcast Speech, Read Speech, Air Travel Planning Kiosk, CTS Arabic/Mandarin/Fisher, News English (1X/10X/unlimited), News Mandarin/Arabic, Noisy News English, Varied Microphones] 
 Image from: http://www.itl.nist.gov/iad/mig/publications/ASRhistory/

  21. Statistical Speech Recognition Pioneer of ASR technology, Fred Jelinek (1932 - 2010): Cast ASR as a channel coding problem. Let O = {O_1, …, O_T} be a sequence of acoustic features corresponding to a speech signal, where O_i ∈ ℝ^d is a d-dimensional acoustic feature vector and T is the length of the sequence. Let W denote a word sequence. An ASR decoder solves the following problem: 
 W* = arg max_W Pr(W | O) = arg max_W Pr(O | W) Pr(W) 
 where Pr(O | W) is the acoustic model and Pr(W) is the language model
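The effect of this decomposition can be seen in a toy rescoring example (all scores below are invented for illustration; a real decoder searches over vastly more hypotheses than two):

```python
# Two candidate transcriptions of the same audio, with made-up scores:
am = {"good prose": 2.0e-7, "good rose": 5.0e-7}   # Pr(O | W), acoustic model
lm = {"good prose": 1.0e-3, "good rose": 1.0e-5}   # Pr(W), language model

# The decoder maximises the product Pr(O | W) * Pr(W):
best = max(am, key=lambda w: am[w] * lm[w])
print(best)   # good prose
```

Here the acoustically better-scoring "good rose" loses because the language model finds it far less probable, which is exactly the point of the noisy-channel factorisation.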

  22. Simple example of isolated word ASR • Task: Recognize utterances which consist of speakers saying either “up”, “down”, “left” or “right” per recording. • Vocabulary: Four words, “up”, “down”, “left”, “right” • Data splits - Training data: 30 utterances - Test data: 20 utterances • Acoustic model: Let’s parameterize Pr_θ(O | W) using a Markov model with parameters θ.

  23. Word-based acoustic model 
 [Figure: left-to-right HMM for “up” with states 0, 1, 2, 3, 4, self-loops a_11, a_22, a_33, transitions a_01, a_12, a_23, a_34, and emission distributions b_1, b_2, b_3 applied to acoustic features O_1, O_2, O_3, O_4, …, O_T] 
 a_ij → transition probability of going from state i to state j 
 b_j(O_i) → probability of generating O_i from state j 
 Compute Pr(O | "up") = Σ_Q Pr(O, Q | "up"). An efficient algorithm exists; it will appear in a later class.
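The efficient algorithm alluded to here is, in standard treatments, the forward algorithm. A minimal sketch with made-up toy parameters (and all-emitting states, unlike the slide's non-emitting entry/exit states 0 and 4):

```python
def forward(A, B, pi):
    """Pr(O | model) = Σ_Q Pr(O, Q | model), computed in O(T * N^2) time
    by dynamic programming instead of summing over N^T state paths.
    A[i][j]: transition prob i -> j; pi[j]: initial state distribution;
    B[t][j]: prob of emitting frame t from state j (observations are
    implicit: B is assumed pre-evaluated on the utterance)."""
    T, N = len(B), len(pi)
    alpha = [pi[j] * B[0][j] for j in range(N)]            # alpha_1(j)
    for t in range(1, T):
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[t][j]
                 for j in range(N)]
    return sum(alpha)

# Toy 2-state left-to-right model and a 2-frame utterance (invented numbers):
A = [[0.5, 0.5], [0.0, 1.0]]
pi = [1.0, 0.0]
B = [[0.5, 0.2], [0.5, 0.2]]
print(forward(A, B, pi))   # ≈ 0.175
```

Real implementations work in log space (or rescale alpha per frame) to avoid underflow over long utterances; the plain-probability version above is only safe for toy inputs.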

  24. Isolated word recognition 
 [Figure: four parallel left-to-right HMMs, one each for “up”, “down”, “left”, “right”, scoring the acoustic features O = O_1, …, O_T to produce Pr(O | "up"), Pr(O | "down"), Pr(O | "left"), Pr(O | "right")] 
 Compute arg max_w Pr(O | w)
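Put together, isolated word recognition reduces to scoring the utterance under each word's model and taking the argmax (a trivial sketch; the likelihoods below are invented, standing in for each word HMM's output):

```python
# Hypothetical Pr(O | w) values from the four word HMMs for one recording:
scores = {"up": 1.2e-9, "down": 3.4e-10, "left": 5.6e-8, "right": 2.1e-9}

def recognize(scores):
    """Return the word whose acoustic model best explains the utterance."""
    return max(scores, key=scores.get)

print(recognize(scores))   # left
```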

  25. Small tweak • Task: Recognize utterances which consist of speakers saying either “up” or “down” multiple times per recording. 
 [Figure: the two left-to-right HMMs for “up” and “down”, as before]

  26. Small tweak • Task: Recognize utterances which consist of speakers saying either “up” or “down” multiple times per recording. 
 [Figure: the “up” and “down” HMMs combined into a single looped graph; recognition becomes a search within this graph]
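The search within this graph is typically done with the Viterbi algorithm (max over paths, where the forward algorithm sums). A minimal sketch on a made-up looped graph where each word is collapsed to a single state; a real system uses the multi-state models above, and all numbers here are hypothetical:

```python
import math

def viterbi(log_trans, log_emit, log_init):
    """Best single state path through a looped graph. Arguments are log
    probabilities: log_trans[i][j] for transitions, log_emit[t][j] for
    frame t emitted in state j, log_init[j] for starting states."""
    T, N = len(log_emit), len(log_init)
    delta = [log_init[j] + log_emit[0][j] for j in range(N)]
    back = []                                   # backpointers per frame
    for t in range(1, T):
        prev, ptrs, delta = delta, [], []
        for j in range(N):
            best_i = max(range(N), key=lambda i: prev[i] + log_trans[i][j])
            delta.append(prev[best_i] + log_trans[best_i][j] + log_emit[t][j])
            ptrs.append(best_i)
        back.append(ptrs)
    j = max(range(N), key=lambda j: delta[j])   # best final state
    path = [j]
    for ptrs in reversed(back):
        j = ptrs[j]
        path.append(j)
    return path[::-1]

# Toy looped graph: one state per word ("up" = 0, "down" = 1); self-loops
# stay inside a word, cross transitions move to the other word.
words = ["up", "down"]
lt = [[math.log(0.7), math.log(0.3)], [math.log(0.3), math.log(0.7)]]
li = [math.log(0.5)] * 2
# Made-up frame scores favouring "up" for two frames, then "down":
le = [[math.log(p) for p in row]
      for row in [[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]]
path = viterbi(lt, le, li)
hyp = [words[j] for t, j in enumerate(path) if t == 0 or path[t - 1] != j]
print(hyp)   # ['up', 'down']
```

Collapsing consecutive identical states into one word is a quirk of this one-state-per-word toy; with the multi-state word models of the earlier slides, repeated sayings of the same word are distinguished by re-entry into the word's first state.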
