HMM Tutorial / Tapas Kanungo - Hidden Markov Models


  1. Hidden Markov Models. Tapas Kanungo, Center for Automation Research, University of Maryland. Web: www.cfar.umd.edu/~kanungo  Email: kanungo@cfar.umd.edu

  2. Outline
     1. Markov models
     2. Hidden Markov models
     3. Forward/Backward algorithm
     4. Viterbi algorithm
     5. Baum-Welch estimation algorithm

  3. Markov Models
     - Observable states: $1, 2, \ldots, N$
     - Observed sequence: $q_1, q_2, \ldots, q_t, \ldots, q_T$
     - First-order Markov assumption:
       $P(q_t = j \mid q_{t-1} = i, q_{t-2} = k, \ldots) = P(q_t = j \mid q_{t-1} = i)$
     - Stationarity: $P(q_t = j \mid q_{t-1} = i) = P(q_{t+l} = j \mid q_{t+l-1} = i)$

  4. Markov Models
     - State transition matrix $A$:

       $$A = \begin{bmatrix}
       a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1N} \\
       a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2N} \\
       \vdots & \vdots &        & \vdots &        & \vdots \\
       a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{iN} \\
       \vdots & \vdots &        & \vdots &        & \vdots \\
       a_{N1} & a_{N2} & \cdots & a_{Nj} & \cdots & a_{NN}
       \end{bmatrix}$$

       where $a_{ij} = P(q_t = j \mid q_{t-1} = i)$, $1 \le i, j \le N$.
     - Constraints on $a_{ij}$: $a_{ij} \ge 0 \;\; \forall\, i, j$ and $\sum_{j=1}^{N} a_{ij} = 1 \;\; \forall\, i$

  5. Markov Models: Example
     - States: 1. Rainy (R), 2. Cloudy (C), 3. Sunny (S)
     - State transition probability matrix:

       $$A = \begin{bmatrix}
       0.4 & 0.3 & 0.3 \\
       0.2 & 0.6 & 0.2 \\
       0.1 & 0.1 & 0.8
       \end{bmatrix}$$

     - Compute the probability of observing $(S, S, S, R, R, S, C, S)$ given that today is $S$.
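To make the transition matrix concrete, here is a minimal Python sketch (assuming NumPy, 0-based state indices, and an arbitrary seed and one-week horizon) that samples a weather sequence from this chain:

```python
import numpy as np

rng = np.random.default_rng(42)   # arbitrary seed

# State order: 0 = Rainy, 1 = Cloudy, 2 = Sunny (matches the rows/columns of A)
A = np.array([[0.4, 0.3, 0.3],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])
names = ["R", "C", "S"]

state = 2                  # start on a Sunny day
days = [state]
for _ in range(7):         # simulate one more week
    state = rng.choice(3, p=A[state])   # next day's weather drawn from row `state` of A
    days.append(state)

print("".join(names[s] for s in days))  # an 8-day weather string, e.g. "SSSSCSSS"
```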

  6. Markov Models: Example
     - Basic conditional probability rule: $P(A, B) = P(A \mid B)\, P(B)$
     - The Markov chain rule:

       $$\begin{aligned}
       P(q_1, q_2, \ldots, q_T)
       &= P(q_T \mid q_1, q_2, \ldots, q_{T-1})\, P(q_1, q_2, \ldots, q_{T-1}) \\
       &= P(q_T \mid q_{T-1})\, P(q_1, q_2, \ldots, q_{T-1}) \\
       &= P(q_T \mid q_{T-1})\, P(q_{T-1} \mid q_{T-2})\, P(q_1, q_2, \ldots, q_{T-2}) \\
       &= P(q_T \mid q_{T-1})\, P(q_{T-1} \mid q_{T-2}) \cdots P(q_2 \mid q_1)\, P(q_1)
       \end{aligned}$$

  7. Markov Models: Example
     - Observation sequence: $O = (S, S, S, R, R, S, C, S)$
     - Using the chain rule we get:

       $$\begin{aligned}
       P(O \mid \text{model})
       &= P(S, S, S, R, R, S, C, S \mid \text{model}) \\
       &= P(S)\, P(S \mid S)\, P(S \mid S)\, P(R \mid S)\, P(R \mid R)\, P(S \mid R)\, P(C \mid S)\, P(S \mid C) \\
       &= \pi_3\, a_{33}\, a_{33}\, a_{31}\, a_{11}\, a_{13}\, a_{32}\, a_{23} \\
       &= (1)(0.8)^2(0.1)(0.4)(0.3)(0.1)(0.2) \\
       &= 1.536 \times 10^{-4}
       \end{aligned}$$

     - The prior probability is $\pi_i = P(q_1 = i)$.
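The same chain-rule product can be checked numerically; a minimal Python sketch (assuming NumPy and the 0-based state order used above, with "today is S" encoded as $\pi = (0, 0, 1)$):

```python
import numpy as np

# State order: 0 = Rainy, 1 = Cloudy, 2 = Sunny
A = np.array([[0.4, 0.3, 0.3],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])
pi = np.array([0.0, 0.0, 1.0])   # "today is S"

R, C, S = 0, 1, 2
O = [S, S, S, R, R, S, C, S]     # the observed weather sequence

# Chain rule: P(O) = pi_{q_1} * prod_t a_{q_{t-1} q_t}
p = pi[O[0]]
for prev, cur in zip(O[:-1], O[1:]):
    p *= A[prev, cur]

print(p)   # approximately 1.536e-04
```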

  8. Markov Models: Example
     - What is the probability that the sequence remains in state $i$ for exactly $d$ time units?

       $$p_i(d) = P(q_1 = i, q_2 = i, \ldots, q_d = i, q_{d+1} \ne i, \ldots) = \pi_i\, (a_{ii})^{d-1} (1 - a_{ii})$$

     - This is an exponential Markov chain duration density.
     - What is the expected value of the duration in state $i$ (conditioned on starting in state $i$, i.e. dropping $\pi_i$)?

       $$\begin{aligned}
       \bar{d}_i &= \sum_{d=1}^{\infty} d\, p_i(d) \\
       &= \sum_{d=1}^{\infty} d\, (a_{ii})^{d-1} (1 - a_{ii}) \\
       &= (1 - a_{ii}) \sum_{d=1}^{\infty} d\, (a_{ii})^{d-1} \\
       &= (1 - a_{ii}) \frac{\partial}{\partial a_{ii}} \sum_{d=1}^{\infty} (a_{ii})^{d} \\
       &= (1 - a_{ii}) \frac{\partial}{\partial a_{ii}} \left( \frac{a_{ii}}{1 - a_{ii}} \right) \\
       &= \frac{1}{1 - a_{ii}}
       \end{aligned}$$

  9. Markov Models: Example
     - Avg. number of consecutive sunny days $= \frac{1}{1 - a_{33}} = \frac{1}{1 - 0.8} = 5$
     - Avg. number of consecutive cloudy days $= 2.5$
     - Avg. number of consecutive rainy days $= 1.67$
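A quick numeric check of the $1/(1 - a_{ii})$ formula against the diagonal of the example transition matrix (a minimal sketch, assuming NumPy):

```python
import numpy as np

A = np.array([[0.4, 0.3, 0.3],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])

# Expected run length in each state: 1 / (1 - a_ii)
expected_days = 1.0 / (1.0 - np.diag(A))
print(expected_days)   # approximately [1.67, 2.5, 5.0] for Rainy, Cloudy, Sunny
```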

  10. Hidden Markov Models
     - States are not observable.
     - Observations are probabilistic functions of state.
     - State transitions are still probabilistic.

  11. Urn and Ball Model
     - $N$ urns containing colored balls
     - $M$ distinct colors of balls
     - Each urn has a (possibly) different distribution of colors
     - Sequence generation algorithm (sketched in code below):
       1. Pick an initial urn according to some random process.
       2. Randomly pick a ball from the urn and then replace it.
       3. Select another urn according to a random selection process associated with the current urn.
       4. Repeat steps 2 and 3.
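The four steps above are exactly how one samples from an HMM. A minimal Python sketch of the procedure (assuming NumPy; the function name and the 2-urn, 3-color example numbers are hypothetical, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_urn_and_ball(A, B, pi, T):
    """Generate T ball colors from the urn-and-ball model.

    A  : (N, N) urn-to-urn selection probabilities
    B  : (N, M) color distribution inside each urn
    pi : (N,)   initial urn distribution
    """
    N, M = B.shape
    urns, colors = [], []
    urn = rng.choice(N, p=pi)                # step 1: pick the initial urn
    for _ in range(T):
        color = rng.choice(M, p=B[urn])      # step 2: draw a ball, record its color, replace it
        urns.append(int(urn))
        colors.append(int(color))
        urn = rng.choice(N, p=A[urn])        # step 3: select the next urn
    return urns, colors                      # step 4 is the loop above

# Hypothetical 2-urn, 3-color example
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.2, 0.7]])
pi = np.array([0.6, 0.4])
print(sample_urn_and_ball(A, B, pi, T=10))
```

The hidden urn sequence plays the role of the states, and the ball colors are the observations.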

  12. The Trellis
     [Figure: trellis diagram with the $N$ states on the vertical axis and time $1, 2, \ldots, t-1, t, t+1, t+2, \ldots, T-1, T$ on the horizontal axis; each time step is aligned with its observation $o_1, \ldots, o_T$.]

  13. Elements of Hidden Markov Models
     - $N$: the number of hidden states
     - $Q$: the set of states, $Q = \{1, 2, \ldots, N\}$
     - $M$: the number of symbols
     - $V$: the set of symbols, $V = \{1, 2, \ldots, M\}$
     - $A$: the state-transition probability matrix, $a_{ij} = P(q_{t+1} = j \mid q_t = i)$, $1 \le i, j \le N$
     - $B$: the observation probability distribution, $B_j(k) = P(o_t = k \mid q_t = j)$, $1 \le k \le M$
     - $\pi$: the initial state distribution, $\pi_i = P(q_1 = i)$, $1 \le i \le N$
     - $\lambda$: the entire model, $\lambda = (A, B, \pi)$
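One way to bundle these elements into a single object is shown in the minimal Python sketch below (the class name HMM and the 0-based NumPy array layout are implementation choices, not part of the slides):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class HMM:
    """The model lambda = (A, B, pi)."""
    A:  np.ndarray   # (N, N) state-transition probabilities: A[i, j] = P(q_{t+1} = j | q_t = i)
    B:  np.ndarray   # (N, M) observation probabilities:      B[j, k] = P(o_t = k | q_t = j)
    pi: np.ndarray   # (N,)   initial state distribution:     pi[i]   = P(q_1 = i)

    def __post_init__(self):
        # Each row of A and B, and pi itself, must be a probability distribution.
        assert np.allclose(self.A.sum(axis=1), 1.0)
        assert np.allclose(self.B.sum(axis=1), 1.0)
        assert np.isclose(self.pi.sum(), 1.0)
```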

  14. Three Basic Problems
     1. Given observation $O = (o_1, o_2, \ldots, o_T)$ and model $\lambda = (A, B, \pi)$, efficiently compute $P(O \mid \lambda)$.
        - Hidden states complicate the evaluation.
        - Given two models $\lambda_1$ and $\lambda_2$, this can be used to choose the better one.
     2. Given observation $O = (o_1, o_2, \ldots, o_T)$ and model $\lambda$, find the optimal state sequence $q = (q_1, q_2, \ldots, q_T)$.
        - The optimality criterion has to be decided (e.g. maximum likelihood).
        - This gives an "explanation" for the data.
     3. Given $O = (o_1, o_2, \ldots, o_T)$, estimate the model parameters $\lambda = (A, B, \pi)$ that maximize $P(O \mid \lambda)$.

  15. Solution to Problem 1
     - Problem: compute $P(o_1, o_2, \ldots, o_T \mid \lambda)$.
     - Algorithm:
       - Let $q = (q_1, q_2, \ldots, q_T)$ be a state sequence.
       - Assume the observations are independent given the states:

         $$P(O \mid q, \lambda) = \prod_{t=1}^{T} P(o_t \mid q_t, \lambda) = b_{q_1}(o_1)\, b_{q_2}(o_2) \cdots b_{q_T}(o_T)$$

       - The probability of a particular state sequence is:

         $$P(q \mid \lambda) = \pi_{q_1}\, a_{q_1 q_2}\, a_{q_2 q_3} \cdots a_{q_{T-1} q_T}$$

       - Also, $P(O, q \mid \lambda) = P(O \mid q, \lambda)\, P(q \mid \lambda)$.
       - Enumerate paths and sum probabilities:

         $$P(O \mid \lambda) = \sum_{q} P(O \mid q, \lambda)\, P(q \mid \lambda)$$

     - There are $N^T$ state sequences, each requiring $O(T)$ calculations.
     - Complexity: $O(T N^T)$ calculations.
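The enumeration can be transcribed directly; a minimal Python sketch (assuming the NumPy layout of the HMM sketch above; it is exponential in $T$, so only usable on tiny problems):

```python
import itertools
import numpy as np

def likelihood_brute_force(O, A, B, pi):
    """P(O | lambda) by summing P(O | q, lambda) P(q | lambda) over all N^T state paths."""
    N = A.shape[0]
    T = len(O)
    total = 0.0
    for q in itertools.product(range(N), repeat=T):   # all N^T state sequences
        p = pi[q[0]] * B[q[0], O[0]]                  # pi_{q_1} * b_{q_1}(o_1)
        for t in range(1, T):
            p *= A[q[t - 1], q[t]] * B[q[t], O[t]]    # a_{q_{t-1} q_t} * b_{q_t}(o_t)
        total += p
    return total
```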

  16. Forward Procedure: Intuition
     [Figure: every state $i = 1, \ldots, N$ at time $t$ feeds state $k$ at time $t+1$ through the transition probabilities $a_{1k}, \ldots, a_{Nk}$.]

  17. Forward Algorithm
     - Define the forward variable $\alpha_t(i)$ as:

       $$\alpha_t(i) = P(o_1, o_2, \ldots, o_t, q_t = i \mid \lambda)$$

     - $\alpha_t(i)$ is the probability of observing the partial sequence $(o_1, o_2, \ldots, o_t)$ such that the state $q_t$ is $i$.
     - Induction:
       1. Initialization: $\alpha_1(i) = \pi_i\, b_i(o_1)$
       2. Induction: $\alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i)\, a_{ij} \right] b_j(o_{t+1})$
       3. Termination: $P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$
     - Complexity: $O(N^2 T)$.
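A minimal Python sketch of steps 1-3 (assuming the NumPy layout used above; no scaling is applied, so for long sequences one would normalize each $\alpha_t$ to avoid numerical underflow):

```python
import numpy as np

def forward(O, A, B, pi):
    """Forward algorithm: returns the (T, N) alpha table and P(O | lambda)."""
    N = A.shape[0]
    T = len(O)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]                   # 1. initialization: alpha_1(i) = pi_i b_i(o_1)
    for t in range(T - 1):
        # 2. induction: alpha_{t+1}(j) = [sum_i alpha_t(i) a_ij] * b_j(o_{t+1})
        alpha[t + 1] = (alpha[t] @ A) * B[:, O[t + 1]]
    return alpha, alpha[-1].sum()                # 3. termination: P(O | lambda) = sum_i alpha_T(i)
```

Each induction step is an $N$-vector times $N \times N$ matrix product repeated roughly $T$ times, which is where the $O(N^2 T)$ cost comes from.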

  18. Example
     Consider the following coin-tossing experiment:

              State 1   State 2   State 3
       P(H)     0.50      0.75      0.25
       P(T)     0.50      0.25      0.75

     - State-transition probabilities are all equal to 1/3.
     - Initial state probabilities are all equal to 1/3.

     1. You observe $O = (H, H, H, H, T, H, T, T, T, T)$. What state sequence $q$ is most likely? What is the joint probability $P(O, q \mid \lambda)$ of the observation sequence and the state sequence?
     2. What is the probability that the observation sequence came entirely from state 1?
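A minimal Python sketch of these exercises (assuming NumPy, 0-based state indices, and HEADS = 0, TAILS = 1 as symbol codes; the per-step argmax shortcut for the most likely $q$ is valid only because the transition and initial probabilities here are uniform):

```python
import numpy as np

# Emission table from the slides: rows = states 1..3, columns = (H, T)
B  = np.array([[0.50, 0.50],
               [0.75, 0.25],
               [0.25, 0.75]])
A  = np.full((3, 3), 1 / 3)   # uniform state transitions
pi = np.full(3, 1 / 3)        # uniform initial distribution

HEADS, TAILS = 0, 1
O = [HEADS, HEADS, HEADS, HEADS, TAILS, HEADS, TAILS, TAILS, TAILS, TAILS]

def joint_prob(O, q):
    """P(O, q | lambda) = pi_{q_1} b_{q_1}(o_1) * prod_t a_{q_{t-1} q_t} b_{q_t}(o_t)."""
    p = pi[q[0]] * B[q[0], O[0]]
    for t in range(1, len(O)):
        p *= A[q[t - 1], q[t]] * B[q[t], O[t]]
    return p

# With uniform A and pi, the most likely q simply maximizes each emission:
# state 2 (index 1) for every H, state 3 (index 2) for every T.
q_best = [int(np.argmax(B[:, o])) for o in O]
print(q_best, joint_prob(O, q_best))       # joint probability is (0.75^10) / 3^10

# Probability that the observations came entirely from state 1 (index 0):
print(joint_prob(O, [0] * len(O)))         # (0.5^10) / 3^10
```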
