Reasoning Under Uncertainty Over Time (Alice Gao, Lecture 16)


  1. 1/25 Reasoning Under Uncertainty Over Time Alice Gao Lecture 16 Readings: R & N 15.1 to 15.3 Based on work by K. Leyton-Brown, K. Larson, and P. van Beek

  2. 2/25 Outline ▶ Learning Goals ▶ Revisiting the Learning Goals

  3. 3/25 Learning Goals By the end of the lecture, you should be able to: ▶ Construct a hidden Markov model given a real-world scenario. ▶ Perform filtering, prediction, and smoothing, and derive the most likely explanation given a hidden Markov model.

  4. 4/25 Inference in a Changing World So far, we can reason probabilistically in a static world. However, the world evolves over time. Applications: ▶ weather predictions ▶ stock market predictions ▶ patient monitoring ▶ robot localization ▶ speech and handwriting recognition

  5. 5/25 The Umbrella World You are the security guard stationed at a secret underground installation. You want to know whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.

  6. 6/25 States and Observations ▶ The world contains a series of time slices. ▶ Each time slice contains a set of random variables, some observable, some not: X_t, the unobservable state variables at time t, and E_t, the observable evidence variables at time t. What are the observable and unobservable random variables in the umbrella world?

  7. 7/25 The transition model How does the current state depend on the previous states? In general, we would need P(X_t | X_{t-1} ∧ X_{t-2} ∧ X_{t-3} ∧ ... ∧ X_1). Problem: As t increases, the number of previous states is unbounded, so this conditional probability distribution can be unboundedly large. Solution: Make the Markov assumption: the current state depends on only a finite, fixed number of previous states.

  8. 8/25 K-order Markov processes First-order Markov process (chain: X_{t-2} → X_{t-1} → X_t → X_{t+1}): the transition model is P(X_t | X_{t-1} ∧ X_{t-2} ∧ ... ∧ X_1) = P(X_t | X_{t-1}). Second-order Markov process (each state also has an arc from the state two steps back): the transition model is P(X_t | X_{t-1} ∧ X_{t-2} ∧ ... ∧ X_1) = P(X_t | X_{t-1} ∧ X_{t-2}).

  9. 9/25 Transition model for the umbrella world (chain: R_{t-2} → R_{t-1} → R_t → R_{t+1}) The future is independent of the past given the present: P(R_t | R_{t-1} ∧ R_{t-2} ∧ R_{t-3} ∧ ... ∧ R_1) = P(R_t | R_{t-1}).

  10. 10/25 Stationary Process Is there a different conditional probability distribution for each time step? Stationary process: ▶ The dynamics do not change over time. ▶ The conditional probability distribution is the same for every time step.

  11. 11/25 Transition model for the umbrella world: P(R_t | R_{t-1}) = 0.7, P(R_t | ¬R_{t-1}) = 0.3.

  12. 12/25 Sensor model How does the evidence variable E_t for each time step t depend on the previous and current state variables? Sensor Markov assumption: the current state is sufficient to generate the current sensor values: P(E_t | X_t ∧ X_{t-1} ∧ ... ∧ X_1 ∧ E_{t-1} ∧ E_{t-2} ∧ ... ∧ E_1) = P(E_t | X_t).

  13. 13/25 Complete model for the umbrella world: prior P(R_1) = 0.5; transition model P(R_t | R_{t-1}) = 0.7, P(R_t | ¬R_{t-1}) = 0.3; sensor model P(U_t | R_t) = 0.9, P(U_t | ¬R_t) = 0.2.
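
To make the model concrete, here is a minimal Python sketch of these slide-13 parameters; the dictionary names prior, transition, and sensor are illustrative, not from the slides:

    # Umbrella-world HMM parameters from slide 13, as plain Python dicts.
    # True stands for "raining" (R_t) or "umbrella seen" (U_t).
    prior = {True: 0.5, False: 0.5}       # P(R_1)
    transition = {True: 0.7, False: 0.3}  # P(R_t = true | R_{t-1} = key)
    sensor = {True: 0.9, False: 0.2}      # P(U_t = true | R_t = key)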

  14. 14/25 Hidden Markov Model ▶ A Markov process. ▶ The state variables are unobservable. ▶ The evidence variables, which depend on the states, are observable.

  15. 15/25 Common Inference Tasks ▶ Filtering: the posterior distribution over the most recent state, given all evidence to date. ▶ Prediction: the posterior distribution over a future state, given all evidence to date. ▶ Smoothing: the posterior distribution over a past state, given all evidence to date. ▶ Most likely explanation: find the sequence of states that is most likely to have generated all the evidence to date.

  16. 16/25 Filtering Given x_{t-1} = P(R_{t-1} | U_1 ∧ ... ∧ U_{t-1}), how do we compute x_t = P(R_t | U_1 ∧ ... ∧ U_t)? Examples: P(R_1 = r_1 | U_1) and P(R_2 = r_2 | U_1 ∧ U_2).

  17. 17/25 CQ: Filtered Estimate What is P(R_1 = t | U_1 = t)? (A) 0.518 (B) 0.618 (C) 0.718 (D) 0.818 (E) 0.918
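
A worked check of this clicker question, as a LaTeX sketch of the arithmetic, using the umbrella-world parameters from slide 13 (P(R_1) = 0.5, P(U_1 | R_1) = 0.9, P(U_1 | ¬R_1) = 0.2):

    P(R_1{=}t \mid U_1{=}t) = \alpha\,P(U_1{=}t \mid R_1{=}t)\,P(R_1{=}t) = \alpha(0.9)(0.5) = 0.45\,\alpha
    P(R_1{=}f \mid U_1{=}t) = \alpha\,P(U_1{=}t \mid R_1{=}f)\,P(R_1{=}f) = \alpha(0.2)(0.5) = 0.10\,\alpha
    \alpha = 1/(0.45 + 0.10), \quad \text{so } P(R_1{=}t \mid U_1{=}t) = 0.45/0.55 \approx 0.818

So the answer is (D).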

  18. 18/25 Forward recursion: Filtered Estimate P(R_2 = r_2 | U_1 = u_1 ∧ U_2 = u_2) = α P(U_2 = u_2 | R_2 = r_2) Σ_{r_1} P(R_2 = r_2 | R_1 = r_1) P(R_1 = r_1 | U_1 = u_1) ▶ From P(R_1 = r_1 | U_1 = u_1) to P(R_2 = r_2 | U_1 = u_1 ∧ U_2 = u_2) ▶ From P(R_2 = r_2 | U_1 = u_1 ∧ U_2 = u_2) to P(R_3 = r_3 | U_1 = u_1 ∧ U_2 = u_2 ∧ U_3 = u_3) ▶ ...
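
Continuing the same numbers one step, and assuming the director also brings the umbrella on day 2 (U_2 = t), this recursion gives (LaTeX sketch):

    \textstyle\sum_{r_1} P(R_2{=}t \mid r_1)\,P(r_1 \mid u_1) = (0.7)(0.818) + (0.3)(0.182) \approx 0.627
    \textstyle\sum_{r_1} P(R_2{=}f \mid r_1)\,P(r_1 \mid u_1) = (0.3)(0.818) + (0.7)(0.182) \approx 0.373
    P(R_2{=}t \mid u_1 \wedge u_2) = \frac{(0.9)(0.627)}{(0.9)(0.627) + (0.2)(0.373)} \approx 0.883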

  19. 19/25 CQ: Filtering Consider P(U_2 | R_2 ∧ U_1). Which one of the following simplifications is valid? (A) P(U_2 | R_2 ∧ U_1) = P(U_2 | R_2) (B) P(U_2 | R_2 ∧ U_1) = P(U_2 | U_1) (C) P(U_2 | R_2 ∧ U_1) = P(U_2) (D) None of (A), (B), and (C) is a valid simplification.

  20. 20/25 CQ: Filtering Consider P(R_2 | R_1 ∧ U_1). Which one of the following simplifications is valid? (A) P(R_2 | R_1 ∧ U_1) = P(R_2 | R_1) (B) P(R_2 | R_1 ∧ U_1) = P(R_2 | U_1) (C) P(R_2 | R_1 ∧ U_1) = P(R_2) (D) None of (A), (B), and (C) is a valid simplification.

  21. 21/25 Forward Recursion for Filtering P(R_t | U_1 ∧ ... ∧ U_t) = α P(U_t | R_t) Σ_{R_{t-1}} P(R_t | R_{t-1}) P(R_{t-1} | U_1 ∧ ... ∧ U_{t-1})
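
A minimal Python sketch of one step of this forward recursion, reusing the hypothetical prior, transition, and sensor dictionaries from the earlier sketch; normalize plays the role of α, and the function names are illustrative:

    def normalize(dist):
        """Rescale a {state: weight} dict so the weights sum to 1 (the alpha factor)."""
        total = sum(dist.values())
        return {state: weight / total for state, weight in dist.items()}

    def filter_step(belief, umbrella_seen):
        """One forward step: P(R_{t-1} | u_{1:t-1}) -> P(R_t | u_{1:t})."""
        new_belief = {}
        for r_t in (True, False):
            # One-step prediction: sum over r_{t-1} of P(R_t | r_{t-1}) P(r_{t-1} | u_{1:t-1})
            predicted = sum(
                (transition[r_prev] if r_t else 1 - transition[r_prev]) * belief[r_prev]
                for r_prev in (True, False)
            )
            # Weight by the sensor model P(U_t | R_t); normalization happens below.
            likelihood = sensor[r_t] if umbrella_seen else 1 - sensor[r_t]
            new_belief[r_t] = likelihood * predicted
        return normalize(new_belief)

    # Day 1: update the prior P(R_1) with the first observation U_1 = true.
    belief = normalize({r: sensor[r] * prior[r] for r in (True, False)})
    # belief is approximately {True: 0.818, False: 0.182}

    # Day 2: one forward-recursion step with U_2 = true.
    belief = filter_step(belief, umbrella_seen=True)
    # belief is approximately {True: 0.883, False: 0.117}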

  22. 22/25 Prediction Forward Recursion Given P(R_{t+k} | U_1 ∧ ... ∧ U_{t-1}), how do we compute P(R_{t+k+1} | U_1 ∧ ... ∧ U_{t-1})? P(R_{t+k+1} | U_1 ∧ ... ∧ U_{t-1}) = Σ_{R_{t+k}} P(R_{t+k+1} | R_{t+k}) P(R_{t+k} | U_1 ∧ ... ∧ U_{t-1})
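
A matching Python sketch of one prediction step, again reusing the hypothetical transition dictionary; only the transition model is applied, since no new evidence arrives:

    def predict_step(belief):
        """One prediction step: P(R_{t+k} | evidence) -> P(R_{t+k+1} | same evidence)."""
        return {
            r_next: sum(
                (transition[r] if r_next else 1 - transition[r]) * belief[r]
                for r in (True, False)
            )
            for r_next in (True, False)
        }

    # Predict two days ahead from the current filtered estimate.
    future = predict_step(predict_step(belief))

Repeated prediction steps converge to the stationary distribution of the transition model, which is 0.5/0.5 for these transition probabilities.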

  23. 23/25 Smoothing Forward recursion: P(R_t | U_1 ∧ ... ∧ U_t) = α P(U_t | R_t) Σ_{R_{t-1}} P(R_t | R_{t-1}) P(R_{t-1} | U_1 ∧ ... ∧ U_{t-1}) Backward recursion: P(U_{k+1} ∧ ... ∧ U_t | R_k) = Σ_{R_{k+1}} P(U_{k+1} | R_{k+1}) P(U_{k+2} ∧ ... ∧ U_t | R_{k+1}) P(R_{k+1} | R_k) Combination: for 1 ≤ k < t, P(R_k | U_1 ∧ ... ∧ U_t) = α P(R_k | U_1 ∧ ... ∧ U_k) P(U_{k+1} ∧ ... ∧ U_t | R_k)
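
A Python sketch of the backward recursion and the smoothing combination, reusing normalize and the model dictionaries from the earlier sketches; function and variable names are illustrative:

    def backward_step(backward_msg, umbrella_seen):
        """One backward step: P(u_{k+2:t} | R_{k+1}) -> P(u_{k+1:t} | R_k)."""
        result = {}
        for r_k in (True, False):
            result[r_k] = sum(
                (sensor[r_next] if umbrella_seen else 1 - sensor[r_next])  # P(U_{k+1} | R_{k+1})
                * backward_msg[r_next]                                     # P(u_{k+2:t} | R_{k+1})
                * (transition[r_k] if r_next else 1 - transition[r_k])     # P(R_{k+1} | R_k)
                for r_next in (True, False)
            )
        return result

    def smooth(forward_msg, backward_msg):
        """Combine P(R_k | u_{1:k}) and P(u_{k+1:t} | R_k) into P(R_k | u_{1:t})."""
        return normalize({r: forward_msg[r] * backward_msg[r] for r in (True, False)})

    # Smooth the day-1 estimate given both observations (t = 2, k = 1).
    forward_1 = normalize({r: sensor[r] * prior[r] for r in (True, False)})   # P(R_1 | u_1)
    backward_1 = backward_step({True: 1.0, False: 1.0}, umbrella_seen=True)   # P(u_2 | R_1)
    print(smooth(forward_1, backward_1))  # approximately {True: 0.883, False: 0.117}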

  24. 24/25 Most likely explanation

  25. 25/25 Revisiting the Learning Goals By the end of the lecture, you should be able to: ▶ Construct a hidden Markov model given a real-world scenario. ▶ Perform filtering, prediction, and smoothing, and derive the most likely explanation given a hidden Markov model.
