HMM & DTW
http://cvsp.cs.ntua.gr/courses/patrec


  1. Pattern Recognition & Speech Recognition, Petros Maragos. HMM & DTW. http://cvsp.cs.ntua.gr/courses/patrec

  2. Forward Algorithm
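The forward algorithm on this slide is the standard alpha recursion for a discrete HMM λ = (A, B, π); a minimal sketch in plain Python (generic Rabiner-style notation, not necessarily the course's exact indexing):

```python
# Forward algorithm for a discrete HMM lambda = (A, B, pi).
# A[i][j] = Pr(S_j at t+1 | S_i at t), B[j][k] = Pr(v_k | S_j), pi[i] = Pr(q_1 = S_i).

def forward(A, B, pi, obs):
    N = len(pi)
    # Initialization: alpha_1(i) = pi_i * b_i(O_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
    # Induction: alpha_{t+1}(j) = [sum_i alpha_t(i) * a_ij] * b_j(O_{t+1})
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]
    # Termination: Pr(O | lambda) = sum_i alpha_T(i)
    return sum(alpha)
```

For long sequences a practical implementation scales each alpha vector (or works in log space) to avoid underflow, which this sketch omits.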

  3. Backward Algorithm
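The backward algorithm is the mirror-image beta recursion; a minimal sketch under the same discrete-HMM conventions (illustrative notation, scalar symbol indices):

```python
# Backward algorithm for a discrete HMM lambda = (A, B, pi).

def backward(A, B, pi, obs):
    N = len(pi)
    # Initialization: beta_T(i) = 1
    beta = [1.0] * N
    # Induction (t = T-1, ..., 1):
    # beta_t(i) = sum_j a_ij * b_j(O_{t+1}) * beta_{t+1}(j)
    for o in reversed(obs[1:]):
        beta = [sum(A[i][j] * B[j][o] * beta[j] for j in range(N))
                for i in range(N)]
    # Termination: Pr(O | lambda) = sum_i pi_i * b_i(O_1) * beta_1(i)
    return sum(pi[i] * B[i][obs[0]] * beta[i] for i in range(N))
```

Both recursions yield the same likelihood Pr(O | λ), which is a useful consistency check when implementing them.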

  4. Probability Functions for Local State Estimation: γ_t(i) = Pr(q_t = S_i | O, λ); locally optimal state q̂_t = argmax_{1 ≤ i ≤ N} γ_t(i)
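The γ function combines the forward and backward trellises, γ_t(i) = α_t(i) β_t(i) / Pr(O | λ), and the per-frame argmax gives the locally optimal state. A self-contained sketch (function name and model values are illustrative):

```python
# Local state estimation via gamma_t(i) = alpha_t(i) * beta_t(i) / Pr(O | lambda).

def local_state_estimates(A, B, pi, obs):
    N, T = len(pi), len(obs)
    # Full forward trellis alpha[t][i]
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for t in range(1, T):
        alpha.append([sum(alpha[-1][i] * A[i][j] for i in range(N)) * B[j][obs[t]]
                      for j in range(N)])
    # Full backward trellis beta[t][i]
    beta = [[1.0] * N for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] for j in range(N))
                   for i in range(N)]
    p_obs = sum(alpha[-1])  # Pr(O | lambda)
    gamma = [[alpha[t][i] * beta[t][i] / p_obs for i in range(N)] for t in range(T)]
    # Frame-by-frame argmax; note this path can contain transitions
    # of zero probability, unlike the Viterbi path
    path = [max(range(N), key=lambda i: g[i]) for g in gamma]
    return gamma, path
```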

  5. HMM Topologies: Ergodic, Left-Right Serial, Left-Right Parallel

  6. HMM: State Estimation, Viterbi Algorithm. Viterbi score: P* = Pr(O, Q* | λ)
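The Viterbi algorithm replaces the forward sum with a max, tracking backpointers ψ so the optimal state sequence Q* can be recovered along with its score P* = Pr(O, Q* | λ). A minimal sketch (same discrete-HMM conventions as above):

```python
# Viterbi algorithm: delta_{t+1}(j) = max_i [delta_t(i) * a_ij] * b_j(O_{t+1}),
# with backpointers psi for path recovery.

def viterbi(A, B, pi, obs):
    N = len(pi)
    delta = [pi[i] * B[i][obs[0]] for i in range(N)]
    psi = []
    for o in obs[1:]:
        best = [max(range(N), key=lambda i: delta[i] * A[i][j]) for j in range(N)]
        delta = [delta[best[j]] * A[best[j]][j] * B[j][o] for j in range(N)]
        psi.append(best)
    # Termination: P* = max_i delta_T(i), then backtrack through psi
    q = max(range(N), key=lambda i: delta[i])
    p_star = delta[q]
    path = [q]
    for best in reversed(psi):
        q = best[q]
        path.append(q)
    path.reverse()
    return p_star, path
```

In practice the recursion is usually run on log probabilities so products become sums and underflow is avoided.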

  7. Example of State Estimation via Viterbi Algorithm

  8. Probability Functions for HMM Parameter Estimation - I

  9. Probability Functions for HMM Parameter Estimation - II

  10. Reestimation of HMM Parameters
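One Baum-Welch (EM) reestimation pass for a discrete HMM uses the standard update formulas π_i' = γ_1(i), a_ij' = Σ_t ξ_t(i,j) / Σ_t γ_t(i), and b_j(k)' = Σ_{t: O_t = v_k} γ_t(j) / Σ_t γ_t(j). A self-contained single-sequence sketch (no scaling, so it is only numerically safe for short sequences):

```python
# One Baum-Welch reestimation pass for a discrete HMM lambda = (A, B, pi).

def reestimate(A, B, pi, obs):
    N, T, M = len(pi), len(obs), len(B[0])
    # Forward and backward trellises
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for t in range(1, T):
        alpha.append([sum(alpha[-1][i] * A[i][j] for i in range(N)) * B[j][obs[t]]
                      for j in range(N)])
    beta = [[1.0] * N for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] for j in range(N))
                   for i in range(N)]
    p = sum(alpha[-1])  # Pr(O | lambda)
    gamma = [[alpha[t][i] * beta[t][i] / p for i in range(N)] for t in range(T)]
    # xi_t(i,j) = alpha_t(i) * a_ij * b_j(O_{t+1}) * beta_{t+1}(j) / Pr(O | lambda)
    xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / p
            for j in range(N)] for i in range(N)] for t in range(T - 1)]
    pi_new = gamma[0][:]
    A_new = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1)) for j in range(N)]
             for i in range(N)]
    B_new = [[sum(gamma[t][j] for t in range(T) if obs[t] == k) /
              sum(gamma[t][j] for t in range(T)) for k in range(M)]
             for j in range(N)]
    return A_new, B_new, pi_new
```

Iterating this pass is guaranteed not to decrease the likelihood Pr(O | λ), which is the EM monotonicity property.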

  11. HMM Continuous Densities

  12. HMM Parameter Estimation for Continuous Densities
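For continuous densities, the M-step replaces the discrete b_j(k) counts with γ-weighted moments of the observations. A sketch for the simplest case of one scalar Gaussian per state (function name and inputs are illustrative; real systems use mixtures of multivariate Gaussians):

```python
# M-step for single-Gaussian emission densities: the new mean and variance of
# state j are gamma-weighted averages of the observations,
#   mu_j  = sum_t gamma_t(j) * O_t          / sum_t gamma_t(j)
#   var_j = sum_t gamma_t(j) * (O_t - mu)^2 / sum_t gamma_t(j)

def gaussian_reestimate(gamma, obs):
    # gamma[t][j] = Pr(q_t = S_j | O, lambda); obs[t] = scalar observation
    T, N = len(obs), len(gamma[0])
    means, variances = [], []
    for j in range(N):
        w = sum(gamma[t][j] for t in range(T))
        mu = sum(gamma[t][j] * obs[t] for t in range(T)) / w
        var = sum(gamma[t][j] * (obs[t] - mu) ** 2 for t in range(T)) / w
        means.append(mu)
        variances.append(var)
    return means, variances
```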

  13. LPC Processor for Speech Recognition
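The core step of an LPC front end is solving the normal equations for the predictor coefficients, usually via the Levinson-Durbin recursion on the frame's autocorrelation values. A sketch of just that step (windowing, pre-emphasis, and LPC-to-cepstrum conversion are omitted):

```python
# Levinson-Durbin recursion: solve for order-p LPC coefficients a[1..p]
# from autocorrelation values r[0..p]; returns (coefficients, error energy).

def levinson_durbin(r, p):
    a = [0.0] * (p + 1)
    e = r[0]  # prediction error energy
    for i in range(1, p + 1):
        # Reflection coefficient k_i
        k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / e
        a_new = a[:]
        a_new[i] = k
        for j in range(1, i):
            a_new[j] = a[j] - k * a[i - j]
        a, e = a_new, e * (1 - k * k)
    return a[1:p + 1], e
```

For an AR(1) signal with r[k] = ρ^k the recursion recovers a single coefficient ρ, and higher-order coefficients come out zero.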

  14. Probability Distributions of Cepstral Coefficients of /zero/

  15. Dynamic Time Warping (DTW)
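DTW aligns two sequences with the dynamic-programming recursion D(i,j) = d(x_i, y_j) + min(D(i-1,j), D(i,j-1), D(i-1,j-1)). A minimal sketch; the local distance here is the absolute difference between scalars (an assumption for illustration; speech systems typically use distances between cepstral feature vectors):

```python
# Basic DTW with symmetric step pattern; returns the total alignment cost.

def dtw(x, y):
    INF = float("inf")
    n, m = len(x), len(y)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])  # local distance d(x_i, y_j)
            D[i][j] = d + min(D[i - 1][j],      # insertion
                              D[i][j - 1],      # deletion
                              D[i - 1][j - 1])  # match
    return D[n][m]
```

Identical sequences, or sequences differing only by time-axis stretching of repeated values, align at zero cost, which is exactly the warping invariance DTW is designed for.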

  16. HMM (Hidden Markov Models)
  • t = 1, 2, 3, …: discrete time
  • HMM: λ = (A, B, π)
  • O = (O_1, O_2, …, O_T): observation sequence
  • T = length of the observation sequence
  • N = number of states; states S_1, S_2, …, S_N
  • M = number of observation symbols / mixtures
  • A = [a_ij], a_ij = Pr(S_j at t+1 | S_i at t): state transition probability matrix
  • B = (b_j(k)), b_j(k) = Pr(v_k at t | S_j at t): observation probability distributions
  • π = (π_i), π_i = Pr(q_1 = S_i): initial state probabilities
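The definitions on this slide can be made concrete as a toy model with N = 2 states and M = 2 observation symbols (the values below are illustrative assumptions, not from the course):

```python
# A concrete discrete HMM lambda = (A, B, pi) following the slide's notation.

A  = [[0.7, 0.3],   # a_ij = Pr(S_j at t+1 | S_i at t)
      [0.4, 0.6]]
B  = [[0.5, 0.5],   # b_j(k) = Pr(v_k at t | S_j at t)
      [0.2, 0.8]]
pi = [0.6, 0.4]     # pi_i = Pr(q_1 = S_i)

# Each row of A and B, and pi itself, must be a probability distribution:
for row in A + B + [pi]:
    assert abs(sum(row) - 1.0) < 1e-12
```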

  17. Problems to Be Solved in HMM
  • Problem 1: Classification – Scoring (Forward-Backward Algorithm). Given an observation sequence O = (O_1, O_2, …, O_T) and a model λ = (π, A, B), compute the likelihood Pr(O | λ).
  • Problem 2: State Estimation (Viterbi Algorithm). Given an observation sequence O = (O_1, O_2, …, O_T), estimate an optimal state sequence Q* = (q_1*, q_2*, …, q_T*) and compute the score Pr(O, Q* | λ).
  • Problem 3: Training (EM Algorithm). Given an observation sequence O = (O_1, O_2, …, O_T), adjust the model parameters λ = (π, A, B) to maximize the likelihood Pr(O | λ).
