Pattern Recognition & Speech Recognition, Petros Maragos: HMM, DTW. http://cvsp.cs.ntua.gr/courses/patrec
Forward Algorithm
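The forward recursion is short enough to sketch directly; below is a minimal NumPy version with an invented 2-state, 3-symbol toy model (the numbers are illustrative, not from the slides).

import numpy as np

def forward(A, B, pi, obs):
    """Forward algorithm: returns alpha (T x N) and the likelihood Pr(O | lambda)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                  # initialization: alpha_1(i) = pi_i b_i(O_1)
    for t in range(1, T):
        # induction: alpha_t(j) = [sum_i alpha_{t-1}(i) a_ij] * b_j(O_t)
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha, alpha[-1].sum()                 # termination: Pr(O|lambda) = sum_i alpha_T(i)

# Invented toy model, reused in the sketches that follow
A = np.array([[0.7, 0.3], [0.4, 0.6]])            # state transition matrix a_ij
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])  # discrete emission probabilities b_j(k)
pi = np.array([0.6, 0.4])                         # initial state probabilities
obs = [0, 1, 2]                                   # observed symbol indices
alpha, likelihood = forward(A, B, pi, obs)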
Backward Algorithm
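The backward recursion mirrors it; a sketch over the same toy model, ending with the sanity check that the backward side reproduces the forward likelihood.

import numpy as np

def backward(A, B, obs):
    """Backward algorithm: beta_T(i) = 1, beta_t(i) = sum_j a_ij b_j(O_{t+1}) beta_{t+1}(j)."""
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

# Same toy model as in the forward sketch
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])
obs = [0, 1, 2]
beta = backward(A, B, obs)
# Pr(O|lambda) computed from the backward side equals the forward result:
print((pi * B[:, obs[0]] * beta[0]).sum())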
Probability Functions for Local State Estimation: $q_t^* = \arg\max_{1 \le i \le N} \gamma_t(i)$
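Combining the two recursions gives the per-frame state posterior $\gamma_t(i)$ and its maximizer; a minimal sketch, assuming alpha and beta come from the forward/backward sketches above.

import numpy as np

def local_state_estimate(alpha, beta):
    """gamma_t(i) = alpha_t(i) beta_t(i) / Pr(O|lambda); q_t* = argmax_i gamma_t(i)."""
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)   # each row sums to Pr(O|lambda) before normalizing
    return gamma, gamma.argmax(axis=1)          # posteriors and the locally optimal state per frame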
HMM Topologies: Ergodic, Left-Right Serial, Left-Right Parallel (transition-matrix examples below)
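The topology only determines which entries $a_{ij}$ are allowed to be nonzero; two hypothetical 3-state examples:

import numpy as np

# Ergodic: fully connected, every a_ij > 0
A_ergodic = np.full((3, 3), 1 / 3)

# Serial left-right: self-loop or advance to the next state only
# (a_ij = 0 for j < i or j > i + 1), the usual choice for speech units
A_left_right = np.array([[0.6, 0.4, 0.0],
                         [0.0, 0.7, 0.3],
                         [0.0, 0.0, 1.0]])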
HMM: State Estimation, Viterbi Algorithm. Viterbi score: $P^* = \Pr(O, Q^* \mid \lambda)$
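A minimal Viterbi sketch over the same invented toy model, returning both the score $P^*$ and the optimal path $Q^*$ via backtracking:

import numpy as np

def viterbi(A, B, pi, obs):
    """Viterbi: delta_t(j) = max_i [delta_{t-1}(i) a_ij] b_j(O_t), with backpointers psi."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        trans = delta[t - 1][:, None] * A        # score of arriving in state j from each i
        psi[t] = trans.argmax(axis=0)            # best predecessor of each state j
        delta[t] = trans.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                # backtrack Q* through psi
        path.append(int(psi[t][path[-1]]))
    return delta[-1].max(), path[::-1]           # P* = Pr(O, Q* | lambda) and Q*

A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])
P_star, Q_star = viterbi(A, B, pi, [0, 1, 2])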
Example of State Estimation via Viterbi Algorithm
Probability Functions for HMM Parameter Estimation - I
Probability Functions for HMM Parameter Estimation - II
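Written out in standard (Rabiner) notation, the two posterior quantities these slides build on are

$$\gamma_t(i) = \Pr(q_t = S_i \mid O, \lambda) = \frac{\alpha_t(i)\,\beta_t(i)}{\Pr(O \mid \lambda)}$$

$$\xi_t(i,j) = \Pr(q_t = S_i,\ q_{t+1} = S_j \mid O, \lambda) = \frac{\alpha_t(i)\,a_{ij}\,b_j(O_{t+1})\,\beta_{t+1}(j)}{\Pr(O \mid \lambda)}$$

with $\gamma_t(i) = \sum_{j=1}^{N} \xi_t(i,j)$; summing over $t$ gives the expected occupancy of $S_i$ and the expected number of transitions $S_i \to S_j$, which drive the reestimation formulas below.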
Reestimation of HMM Parameters
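A single Baum-Welch (EM) reestimation step for the discrete case can be sketched as follows, assuming alpha and beta from the forward/backward sketches above (an end-to-end usage example follows the problem summary at the end of this section):

import numpy as np

def reestimate(A, B, pi, obs, alpha, beta):
    """One Baum-Welch update of (pi, A, B) from a single observation sequence."""
    likelihood = alpha[-1].sum()
    gamma = alpha * beta / likelihood                     # gamma_t(i)
    # xi_t(i,j) = alpha_t(i) a_ij b_j(O_{t+1}) beta_{t+1}(j) / Pr(O|lambda)
    xi = (alpha[:-1, :, None] * A[None, :, :]
          * (B[:, obs[1:]].T * beta[1:])[:, None, :]) / likelihood
    new_pi = gamma[0]                                     # expected occupancy of S_i at t = 1
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]  # expected transitions / occupancy
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):                           # expected emissions of symbol v_k
        new_B[:, k] = gamma[np.asarray(obs) == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B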
HMM Continuous Densities
HMM Parameter Estimation for Continuous Densities
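For continuous densities the discrete $b_j(k)$ is replaced by a mixture density $b_j(\mathbf{o}) = \sum_m c_{jm}\,\mathcal{N}(\mathbf{o};\, \boldsymbol{\mu}_{jm}, \boldsymbol{\Sigma}_{jm})$; a sketch of evaluating it for one state, assuming diagonal covariances for simplicity (all parameters invented):

import numpy as np

def gmm_emission(o, c, mu, var):
    """b_j(o) = sum_m c_m N(o; mu_m, diag(var_m)) for one state j.
    c: (M,) mixture weights; mu, var: (M, D) per-mixture means and variances."""
    diff2 = (o - mu) ** 2 / var
    log_comp = -0.5 * (np.log(2 * np.pi * var).sum(axis=1) + diff2.sum(axis=1))
    return float(c @ np.exp(log_comp))

# Invented 2-mixture, 2-dimensional example
c = np.array([0.6, 0.4])
mu = np.array([[0.0, 0.0], [1.0, 1.0]])
var = np.array([[1.0, 1.0], [0.5, 0.5]])
print(gmm_emission(np.array([0.5, 0.2]), c, mu, var))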
LPC Processor for Speech Recognition
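The LPC front end autocorrelates each frame, solves for the prediction coefficients with the Levinson-Durbin recursion, and converts them to cepstral coefficients; a compact sketch (preemphasis, windowing, and delta features omitted):

import numpy as np

def lpc_cepstrum(frame, p):
    """LPC coefficients a_1..a_p via Levinson-Durbin, then the LPC-derived cepstrum c_1..c_p."""
    r = np.array([frame[:len(frame) - k] @ frame[k:] for k in range(p + 1)])  # autocorrelation
    a, E = np.zeros(p + 1), r[0]
    a[0] = 1.0
    for i in range(1, p + 1):                              # Levinson-Durbin recursion
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / E
        a[1:i + 1] += k * np.concatenate((a[i - 1:0:-1], [1.0]))
        E *= 1 - k * k
    c = np.zeros(p + 1)
    for n in range(1, p + 1):                              # cepstrum of the all-pole model 1/A(z)
        c[n] = -a[n] - sum((m / n) * c[m] * a[n - m] for m in range(1, n))
    return a[1:], c[1:]

frame = np.sin(0.1 * np.arange(240))                       # invented stand-in for a speech frame
a, c = lpc_cepstrum(frame, p=8)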
Probability Distributions of Cepstral Coefficients of /zero/
Dynamic Time Warping (DTW)
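A minimal DTW sketch with the basic local path (insertion, match, deletion) and a Euclidean local distance; the feature sequences are invented stand-ins for cepstral frame sequences:

import numpy as np

def dtw(X, Y):
    """Dynamic time warping cost between feature sequences X (Tx, D) and Y (Ty, D)."""
    Tx, Ty = len(X), len(Y)
    D = np.full((Tx + 1, Ty + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Tx + 1):
        for j in range(1, Ty + 1):
            cost = np.linalg.norm(X[i - 1] - Y[j - 1])   # local distance d(i, j)
            D[i, j] = cost + min(D[i - 1, j],            # deletion
                                 D[i - 1, j - 1],        # match
                                 D[i, j - 1])            # insertion
    return D[Tx, Ty]

X = np.random.default_rng(0).normal(size=(30, 12))       # e.g. 30 frames of 12 cepstral coefs
Y = X[::2]                                               # a time-compressed variant of X
print(dtw(X, Y))                                         # warping absorbs the 2x compression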
HMM (Hidden Markov Models)
• t = 1, 2, 3, ...: discrete time; HMM model: $\lambda = (A, B, \pi)$
• $O = (O_1, O_2, \ldots, O_T)$: observation sequence
• $T$ = length of the observation sequence
• $N$ = number of states $S_1, S_2, \ldots, S_N$
• $M$ = number of observation symbols / mixtures
• $A = [a_{ij}]$, $a_{ij} = \Pr(S_j \text{ at } t+1 \mid S_i \text{ at } t)$: state transition probability matrix
• $B = \{b_j(k)\}$, $b_j(k) = \Pr(v_k \text{ at } t \mid S_j \text{ at } t)$: observation probability distributions
• $\pi = \{\pi_i\}$, $\pi_i = \Pr(S_i \text{ at } t = 1)$: initial state probabilities
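The definition is easiest to internalize generatively: draw $q_1$ from $\pi$, then alternate emissions from $B$ and transitions from $A$; a sketch with the same invented toy parameters:

import numpy as np

def sample_hmm(A, B, pi, T, seed=0):
    """Generate a state sequence Q and observation sequence O of length T from lambda = (A, B, pi)."""
    rng = np.random.default_rng(seed)
    q = rng.choice(len(pi), p=pi)                 # q_1 ~ pi
    states, obs = [q], [rng.choice(B.shape[1], p=B[q])]
    for _ in range(T - 1):
        q = rng.choice(len(pi), p=A[q])           # q_{t+1} ~ a_{q_t, .}
        states.append(q)
        obs.append(rng.choice(B.shape[1], p=B[q]))  # O_t ~ b_{q_t}(.)
    return states, obs

A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])
print(sample_hmm(A, B, pi, T=5))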
Problems to Be Solved in HMM
• Problem 1: Classification – Scoring (Forward-Backward Algorithm). Given an observed sequence $O = (O_1, O_2, \ldots, O_T)$ and a model $\lambda = (\pi, A, B)$, compute the likelihood $\Pr(O \mid \lambda)$.
• Problem 2: State Estimation (Viterbi Algorithm). Given an observed sequence $O = (O_1, O_2, \ldots, O_T)$, estimate an optimal state sequence $Q^* = (q_1^*, q_2^*, \ldots, q_T^*)$ and compute the score $\Pr(O, Q^* \mid \lambda)$.
• Problem 3: Training (EM Algorithm). Given an observed sequence $O = (O_1, O_2, \ldots, O_T)$, adjust the model parameters $\lambda = (\pi, A, B)$ to maximize the likelihood $\Pr(O \mid \lambda)$.
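Tying the three problems back to the code: the snippet below assumes the forward, backward, viterbi, and reestimate sketches defined earlier in this section, along with their toy A, B, pi, obs.

# Problem 1: scoring
alpha, P = forward(A, B, pi, obs)
beta = backward(A, B, obs)
# Problem 2: state estimation
P_star, Q_star = viterbi(A, B, pi, obs)
# Problem 3: one EM step
pi2, A2, B2 = reestimate(A, B, pi, obs, alpha, beta)
print(P, forward(A2, B2, pi2, obs)[1])   # EM guarantees the second likelihood >= the first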