Probability and Time

Probability and Time: Hidden Markov Models - PowerPoint PPT Presentation



  1. Probability and Time: Hidden Markov Models (HMMs). Computer Science cpsc322, Lecture 32 (Textbook Chpt 6.5.2), Nov 25, 2013. CPSC 322, Lecture 32, Slide 1

  2. Lecture Overview • Recap • Markov Models • Markov Chain • Hidden Markov Models

  3. Answering Queries under Uncertainty. Probability Theory; Static Belief Network & Variable Elimination; Dynamic Bayesian Network; Markov Chains; Hidden Markov Models. Applications: student tracing in tutoring systems, monitoring (e.g., credit cards), Robotics, BioInformatics, Natural Language Processing, diagnostic systems (e.g., medicine), email spam filters

  4. Stationary Markov Chain (SMC). A stationary Markov Chain: for all t > 0, • P(S_t+1 | S_0, …, S_t) = P(S_t+1 | S_t) (the Markov assumption) and • P(S_t+1 | S_t) is the same for every t (stationarity). We only need to specify P(S_0) and P(S_t+1 | S_t). • Simple model, easy to specify • Often the natural model • The network can extend indefinitely • Variations of SMC are at the core of most Natural Language Processing (NLP) applications!
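The slide's point that a stationary chain needs only P(S_0) and one transition table can be sketched in a few lines. The two-state weather model (sunny/rainy) below is a made-up illustration, not from the slides.

```python
import random

# A minimal sketch of a stationary Markov chain: the SAME transition
# table P(S_{t+1} | S_t) is reused at every time step, so the whole
# chain is specified by P(S0) and that one table.
STATES = ["sunny", "rainy"]
P_S0 = [0.5, 0.5]                      # P(S0)
P_NEXT = [[0.8, 0.2],                  # P(S_{t+1} | S_t = sunny)
          [0.4, 0.6]]                  # P(S_{t+1} | S_t = rainy)

def sample_chain(t_max, seed=0):
    """Sample a trajectory S_0 .. S_{t_max}; the chain extends as far as needed."""
    rng = random.Random(seed)
    s = rng.choices(range(len(STATES)), weights=P_S0)[0]
    traj = [s]
    for _ in range(t_max):
        s = rng.choices(range(len(STATES)), weights=P_NEXT[s])[0]
        traj.append(s)
    return traj

traj = sample_chain(10)
```

Because the table is time-invariant, "the network can extend indefinitely": sampling one more step never needs new parameters.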

  5. Lecture Overview • Recap • Markov Models • Markov Chain • Hidden Markov Models

  6. How can we minimally extend Markov Chains, maintaining the Markov and stationary assumptions? A useful situation to model is the one in which: • the reasoning system does not have access to the states • but can make observations that give some information about the current state

  7. A. 2 x h  B. h x h  C. k x h  D. k x k

  8. Hidden Markov Model • A Hidden Markov Model (HMM) starts with a Markov chain, and adds a noisy observation about the state at each time step: • |domain(S)| = k • |domain(O)| = h • P(S_0) specifies initial conditions • P(S_t+1 | S_t) specifies the dynamics • P(O_t | S_t) specifies the sensor model. A. 2 x h  B. h x h  C. k x h  D. k x k

  9. Hidden Markov Model • A Hidden Markov Model (HMM) starts with a Markov chain, and adds a noisy observation about the state at each time step: • |domain(S)| = k • |domain(O)| = h • P(S_0) specifies initial conditions • P(S_t+1 | S_t) specifies the dynamics • P(O_t | S_t) specifies the sensor model
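A quick sketch of the table shapes implied by this definition (uniform placeholder values, not a real model): the dynamics needs a k x k table and the sensor model a k x h table, which is what the clicker options above are asking about.

```python
# Sketch: the tables an HMM needs, given |domain(S)| = k and |domain(O)| = h.
# Values here are uniform placeholders; only the SHAPES matter.
k, h = 16, 2                               # e.g., 16 locations, boolean sensor
p_s0 = [1.0 / k] * k                       # P(S0): k numbers
p_dyn = [[1.0 / k] * k for _ in range(k)]  # P(S_{t+1} | S_t): k x k
p_obs = [[1.0 / h] * h for _ in range(k)]  # P(O_t | S_t):     k x h
```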

  10. Example: Localization for the “Pushed Around” Robot • Localization (where am I?) is a fundamental problem in robotics • Suppose a robot is in a circular corridor with 16 locations • There are four doors, at positions 2, 4, 7, 11 • The robot initially doesn’t know where it is • The robot is pushed around; after a push it can stay in the same location, or move one step left or right • The robot has a noisy sensor telling whether it is in front of a door

  11. This scenario can be represented as… • Example Stochastic Dynamics: when pushed, it stays in the same location with p = 0.2, and moves one step left or right with equal probability. P(Loc_t+1 | Loc_t), illustrated for Loc_t = 10 (distribution diagrams A, B, C on the slide)

  12. This scenario can be represented as… • Example Stochastic Dynamics: when pushed, it stays in the same location with p = 0.2, and moves one step left or right with equal probability. P(Loc_t+1 | Loc_t), P(Loc_1)
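The pushed-around dynamics on the 16-location circular corridor can be written out as a transition table, a sketch under the slide's numbers: stay with p = 0.2, and split the remaining 0.8 equally between one step left and one step right.

```python
# Dynamics P(Loc_{t+1} | Loc_t) for the "pushed around" robot on a
# circular corridor of 16 locations; modulo arithmetic handles the wrap.
N = 16
p_dyn = [[0.0] * N for _ in range(N)]
for loc in range(N):
    p_dyn[loc][loc] = 0.2               # stays in the same location
    p_dyn[loc][(loc - 1) % N] = 0.4     # pushed one step left (wraps)
    p_dyn[loc][(loc + 1) % N] = 0.4     # pushed one step right (wraps)
```

Each row is a proper distribution (0.2 + 0.4 + 0.4 = 1), and the wrap-around at locations 0 and 15 comes for free from the modulo.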

  13. This scenario can be represented as… Example of noisy sensor P(O_t | Loc_t) telling whether the robot is in front of a door: • If it is in front of a door, P(O_t = T) = 0.8 • If not in front of a door, P(O_t = T) = 0.1
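The sensor model from this slide is a one-line table lookup; a minimal sketch using the door positions 2, 4, 7, 11 given earlier:

```python
# Noisy door sensor P(O_t | Loc_t): reads True with probability 0.8 at a
# door and 0.1 elsewhere (false positives and false negatives both possible).
DOORS = {2, 4, 7, 11}

def p_obs_true(loc):
    """P(O_t = T | Loc_t = loc)."""
    return 0.8 if loc in DOORS else 0.1
```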

  14. Useful inference in HMMs • Localization: the robot starts at an unknown location and is pushed around t times; it wants to determine where it is • In general: compute the posterior distribution over the current state given all evidence to date, P(S_t | O_0 … O_t)
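Computing P(S_t | O_0 … O_t) can be done with the standard forward (filtering) recursion: predict with the dynamics, weight by the observation likelihood, renormalize. A self-contained sketch using the corridor model from these slides (the particular observation sequence is made up):

```python
# Forward filtering for the corridor HMM: belief_t = P(Loc_t | O_0 .. O_t).
N = 16
DOORS = {2, 4, 7, 11}

def dyn_row(loc):
    """P(Loc_{t+1} | Loc_t = loc): stay 0.2, left 0.4, right 0.4."""
    row = [0.0] * N
    row[loc] = 0.2
    row[(loc - 1) % N] = 0.4
    row[(loc + 1) % N] = 0.4
    return row

ROWS = [dyn_row(s) for s in range(N)]

def likelihood(obs, loc):
    """P(O_t = obs | Loc_t = loc) for the door sensor."""
    p_true = 0.8 if loc in DOORS else 0.1
    return p_true if obs else 1.0 - p_true

def filter_step(belief, obs):
    """One predict-then-update step of the forward recursion."""
    predicted = [sum(belief[s] * ROWS[s][s2] for s in range(N)) for s2 in range(N)]
    weighted = [predicted[s2] * likelihood(obs, s2) for s2 in range(N)]
    z = sum(weighted)                       # normalization constant
    return [w / z for w in weighted]

belief = [1.0 / N] * N                      # robot starts at an unknown location
for obs in [True, True, False]:             # example door-sensor readings
    belief = filter_step(belief, obs)
```

This is the "take advantage of the fact that time moves forward" idea mentioned later: each step only needs the previous belief, not the whole history.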

  15. Example: Robot Localization • Suppose a robot wants to determine its location based on its actions and its sensor readings • Three actions: goRight, goLeft, Stay • This can be represented by an augmented HMM

  16. Robot Localization: Sensor and Dynamics Model • Sample Sensor Model (assume the same as for the pushed-around robot) • Sample Stochastic Dynamics P(Loc_t+1 | Action_t, Loc_t): P(Loc_t+1 = L | Action_t = goRight, Loc_t = L) = 0.1; P(Loc_t+1 = L+1 | Action_t = goRight, Loc_t = L) = 0.8; P(Loc_t+1 = L+2 | Action_t = goRight, Loc_t = L) = 0.074; P(Loc_t+1 = L' | Action_t = goRight, Loc_t = L) = 0.002 for all other locations L' • All location arithmetic is modulo 16 • The action goLeft works the same but to the left
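The goRight dynamics above can be sketched directly, with the modulo-16 arithmetic the slide calls for. Note the numbers are consistent: 0.1 + 0.8 + 0.074 + 13 x 0.002 = 1.

```python
# Action-conditioned dynamics P(Loc_{t+1} | Action_t = goRight, Loc_t = L)
# on the 16-location circular corridor; all location arithmetic is mod 16.
N = 16

def p_go_right(loc):
    row = [0.002] * N            # 0.002 for every "other" location L'
    row[loc % N] = 0.1           # fails to move
    row[(loc + 1) % N] = 0.8     # intended move
    row[(loc + 2) % N] = 0.074   # overshoots by one
    return row
```

goLeft would mirror this with `loc - 1` and `loc - 2`; Stay would concentrate mass on `loc` itself (exact Stay probabilities are not given on the slides).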

  17. Dynamics Model: More Details • Sample Stochastic Dynamics P(Loc_t+1 | Action_t, Loc_t): P(Loc_t+1 = L | Action_t = goRight, Loc_t = L) = 0.1; P(Loc_t+1 = L+1 | Action_t = goRight, Loc_t = L) = 0.8; P(Loc_t+1 = L+2 | Action_t = goRight, Loc_t = L) = 0.074; P(Loc_t+1 = L' | Action_t = goRight, Loc_t = L) = 0.002 for all other locations L'

  18. Robot Localization: Additional Sensor • Additional Light Sensor: there is light coming through an opening at location 10, P(L_t | Loc_t) • Info from the two sensors is combined: “Sensor Fusion”
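With two sensors that are conditionally independent given the location, "sensor fusion" reduces to multiplying their likelihoods. A sketch, with an assumption flagged: the slides do not give the light sensor's probabilities, so the 0.9 / 0.05 values below are made up for illustration.

```python
# Sensor fusion: P(O_t, L_t | Loc_t) = P(O_t | Loc_t) * P(L_t | Loc_t)
# when the door sensor O_t and light sensor L_t are conditionally
# independent given the location.
DOORS = {2, 4, 7, 11}
LIGHT_LOC = 10                     # opening with light, from the slide

def door_lik(obs, loc):
    p = 0.8 if loc in DOORS else 0.1
    return p if obs else 1.0 - p

def light_lik(obs, loc):
    p = 0.9 if loc == LIGHT_LOC else 0.05   # ASSUMED values, not from slides
    return p if obs else 1.0 - p

def fused_lik(door_obs, light_obs, loc):
    return door_lik(door_obs, loc) * light_lik(light_obs, loc)
```

In filtering, `fused_lik` simply replaces the single-sensor likelihood in the update step; nothing else changes.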

  19. The robot starts at an unknown location and must determine where it is. The model appears to be too ambiguous: • Sensors are too noisy • Dynamics are too stochastic to infer anything. But inference actually works pretty well. Let’s check: http://www.cs.ubc.ca/spider/poole/demos/localization/localization.html You can use standard Bnet inference; however, you typically take advantage of the fact that time moves forward (not in 322)

  20. Sample scenario to explore in the demo • Keep making observations without moving. What happens? • Then keep moving without making observations. What happens? • Assuming you are at a certain position, alternate moves and observations • …

  21. HMMs have many other applications… Natural Language Processing: e.g., Speech Recognition • States: phoneme \ word • Observations: acoustic signal \ phoneme. Bioinformatics: Gene Finding • States: coding / non-coding region • Observations: DNA sequences. For these problems the critical inference is: find the most likely sequence of states given a sequence of observations
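The "most likely sequence of states" inference named here is standardly computed with the Viterbi algorithm (not covered in these slides). A minimal sketch; the tiny two-state model in the usage example is made up:

```python
import math

def viterbi(obs, p_s0, p_dyn, p_obs):
    """Most likely state sequence for an observation list (log-space Viterbi)."""
    k = len(p_s0)
    # best[s]: log-prob of the best path ending in state s; back: pointers
    best = [math.log(p_s0[s]) + math.log(p_obs[s][obs[0]]) for s in range(k)]
    back = []
    for o in obs[1:]:
        prev, best, ptr = best, [], []
        for s2 in range(k):
            s1 = max(range(k), key=lambda s: prev[s] + math.log(p_dyn[s][s2]))
            best.append(prev[s1] + math.log(p_dyn[s1][s2]) + math.log(p_obs[s2][o]))
            ptr.append(s1)
        back.append(ptr)
    # follow backpointers from the best final state
    s = max(range(k), key=lambda j: best[j])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return list(reversed(path))

# Made-up 2-state model: sticky dynamics, fairly reliable observations.
path = viterbi([0, 0, 1, 1],
               p_s0=[0.5, 0.5],
               p_dyn=[[0.9, 0.1], [0.1, 0.9]],
               p_obs=[[0.9, 0.1], [0.1, 0.9]])
```

For speech recognition the states would be phonemes or words and the observations acoustic features; for gene finding, coding/non-coding regions and DNA symbols.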

  22. Markov Models • Markov Chains: the simplest possible Dynamic Bnet • Add noisy observations about the state at time t: Hidden Markov Model • Add actions and values (rewards): Markov Decision Processes (MDPs)

  23. Learning Goals for today’s class. You can: • Specify the components of a Hidden Markov Model (HMM) • Justify and apply HMMs to Robot Localization. Clarification on the second LG from last class. You can: • Justify and apply Markov Chains to compute the probability of a Natural Language sentence (NOT to estimate the conditional probs, slide 18)

  24. Next week. Representation and Reasoning Technique, by Environment (Deterministic vs. Stochastic): • Static, Constraint Satisfaction: Vars + Constraints, via Arc Consistency, Search, SLS • Static, Query: Logics (Search) vs. Belief Nets (Var. Elimination), Markov Chains and HMMs • Sequential, Planning: STRIPS (Search) vs. Decision Nets (Var. Elimination), Markov Decision Processes (Value Iteration)

  25. Next Class • One-off decisions (TextBook 9.2) • Single-Stage Decision networks (9.2.1)
