
LECTURE 25: BAYESIAN FILTERS, MONTE CARLO LOCALIZATION (PF)



  1. 16-311-Q INTRODUCTION TO ROBOTICS, FALL '17. LECTURE 25: BAYESIAN FILTERS, MONTE CARLO LOCALIZATION (PF). Instructor: Gianni A. Di Caro

  2. PROBABILISTIC INFERENCE
Localization is an instance of the more general problem of state estimation in a noisy (feedback-based) controlled system, subject to process noise and measurement noise.
• Probabilistic inference is the problem of estimating the hidden variables (states or parameters) of a system in an optimal and consistent fashion (using probability theory), given noisy or incomplete observations.
For a robot, the system typically evolves over time → sequential probabilistic inference: estimate x_k given z_{1:k} and information about the system's dynamics and about how the observations are obtained. Given z, what can we infer about x?

  3. PROBABILISTIC INFERENCE

  4. MARKOV ASSUMPTION
• The state is a sufficient statistic
• Static world
• Independent noise

  5. BAYESIAN FILTER
Use Bayes rule for performing the Prediction and Correction updates. The posterior probability is called the belief.
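The prediction and correction updates can be sketched for a discrete state space. This is a minimal illustration only: the three-cell world, the transition table, and the sensor likelihoods below are invented for the example, not taken from the slides.

```python
# Minimal discrete Bayes filter: one prediction step followed by one
# correction step, over a toy three-cell world.

def predict(belief, transition):
    """Prediction: bel_bar(x) = sum_x' P(x | u, x') bel(x')."""
    n = len(belief)
    return [sum(transition[xp][x] * belief[xp] for xp in range(n))
            for x in range(n)]

def correct(belief_bar, likelihood):
    """Correction: bel(x) = eta * P(z | x) * bel_bar(x)."""
    unnorm = [likelihood[x] * belief_bar[x] for x in range(len(belief_bar))]
    eta = 1.0 / sum(unnorm)          # normalizer
    return [eta * p for p in unnorm]

# Toy numbers: the robot starts in cell 0 and tends to move one cell right.
belief = [1.0, 0.0, 0.0]             # prior: certain it is in cell 0
T = [[0.1, 0.9, 0.0],                # row xp = from-state, column x = to-state
     [0.0, 0.1, 0.9],
     [0.0, 0.0, 1.0]]
z_likelihood = [0.1, 0.8, 0.1]       # P(z | x) for the observation received

bel_bar = predict(belief, T)         # prediction update
bel = correct(bel_bar, z_likelihood) # correction update
```

After the prediction the belief spreads according to the motion model; the correction then concentrates it on the cell best supported by the observation.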

  6. NON-PARAMETRIC VS. GAUSSIAN FILTERS

  7. BAYES FORMULA
P(x, y) = P(x | y) P(y) = P(y | x) P(x)
⇒ P(x | y) = P(y | x) P(x) / P(y) = likelihood · prior / evidence
Normalization:
P(x | y) = η P(y | x) P(x), with η = P(y)^(-1) = 1 / Σ_x P(y | x) P(x)
Algorithm:
∀x: aux_{x|y} = P(y | x) P(x)
η = 1 / Σ_x aux_{x|y}
∀x: P(x | y) = η aux_{x|y}
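The normalization algorithm on this slide translates directly into code. A short sketch in Python; the prior and likelihood numbers are made up for illustration (they happen to match the classic "is the door open?" example):

```python
# Bayes rule with explicit normalization, following the slide's algorithm:
#   aux_{x|y} = P(y|x) P(x);  eta = 1 / sum_x aux_{x|y};  P(x|y) = eta * aux_{x|y}

def bayes_posterior(prior, likelihood):
    """prior[x] = P(x), likelihood[x] = P(y | x); returns P(x | y)."""
    aux = {x: likelihood[x] * prior[x] for x in prior}
    eta = 1.0 / sum(aux.values())        # eta = 1 / P(y)
    return {x: eta * a for x, a in aux.items()}

# Illustrative values: hidden state x is the door, y is a sensor reading.
prior = {"open": 0.5, "closed": 0.5}
likelihood = {"open": 0.6, "closed": 0.3}   # P(y | x)
posterior = bayes_posterior(prior, likelihood)
```

Note that η is computed from the unnormalized products, so the evidence P(y) never has to be modeled explicitly.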

  8. TOTAL PROBABILITY AND CONDITIONING
• Total probability:
P(x) = ∫ P(x, z) dz
P(x) = ∫ P(x | z) P(z) dz
P(x | y) = ∫ P(x | y, z) P(z | y) dz
• Conditional independence:
P(x, y | z) = P(x | z) P(y | z)
equivalent to
P(x | z) = P(x | z, y) and P(y | z) = P(y | z, x)
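For discrete variables the integrals become sums, which makes the total probability rule easy to check numerically. A quick sketch with a made-up joint distribution over two binary variables:

```python
# Total probability (discrete form): P(x) = sum_z P(x | z) P(z).
# The joint table P(x, z) below is invented for illustration.
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

# Marginal P(z) and conditional P(x | z) derived from the joint table.
P_z = {z: sum(p for (x, zz), p in joint.items() if zz == z) for z in (0, 1)}
P_x_given_z = {(x, z): joint[(x, z)] / P_z[z] for (x, z) in joint}

# Marginalize two ways: directly, and via total probability.
P_x_direct = {x: sum(p for (xx, z), p in joint.items() if xx == x)
              for x in (0, 1)}
P_x_total = {x: sum(P_x_given_z[(x, z)] * P_z[z] for z in (0, 1))
             for x in (0, 1)}
```

Both marginalizations agree, as the total probability rule requires.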

  9. BAYES FILTERS
• Given:
  • Stream of observations z and action data u: d_t = {u_1, z_1, …, u_t, z_t}
  • Sensor model P(z | x)
  • Action model P(x | u, x')
  • Prior probability of the system state P(x)
• Wanted:
  • Estimate of the state x of the dynamical system
  • The posterior of the state is also called the belief:
    Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)
Markov assumptions:
p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)
p(x_t | x_{1:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)
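Under these Markov assumptions the belief can be computed recursively over the data stream d_t. A minimal sketch on a hypothetical 1-D cyclic grid; the color map, motion model, and sensor model are all invented for the example:

```python
# Recursive Bayes filter over a stream of (action, observation) pairs on a
# cyclic grid of N cells. Each cell has a color; the sensor reports the
# robot's cell color correctly with probability 0.8.

N = 5
colors = ["R", "G", "R", "R", "G"]   # hypothetical map

def motion_model(x, u, x_prev):
    """P(x | u, x'): reach the target cell w.p. 0.8, stay put w.p. 0.2
    (assumes u != 0 so the two outcomes are distinct)."""
    target = (x_prev + u) % N
    if x == target:
        return 0.8
    if x == x_prev:
        return 0.2
    return 0.0

def sensor_model(z, x):
    """P(z | x)."""
    return 0.8 if z == colors[x] else 0.2

def bayes_filter(bel, stream):
    for u, z in stream:
        # Prediction: integrate the action model over the previous belief.
        bel_bar = [sum(motion_model(x, u, xp) * bel[xp] for xp in range(N))
                   for x in range(N)]
        # Correction: weight by the sensor model and normalize.
        unnorm = [sensor_model(z, x) * bel_bar[x] for x in range(N)]
        eta = 1.0 / sum(unnorm)
        bel = [eta * w for w in unnorm]
    return bel

bel0 = [1.0 / N] * N                                  # uniform prior P(x)
bel = bayes_filter(bel0, [(1, "G"), (1, "R"), (1, "R")])
```

Starting from a uniform prior, the color sequence G, R, R while moving one cell right each step is most consistent with the path 1 → 2 → 3, so the belief peaks at cell 3.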

  10. NEXT…
Follow up on the slides from Cyrill Stachniss (check the course website):
• Bayes filters
• Particle filter
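Since the lecture title names Monte Carlo Localization, the same predict/correct cycle can also be sketched with a particle filter, which represents the belief by samples instead of a parametric density. Everything below is an illustrative toy: a 1-D world with one landmark at the origin, invented noise parameters, and a Gaussian range sensor.

```python
import math
import random

# Minimal particle-filter (Monte Carlo) version of the Bayes filter cycle:
# predict by sampling a motion model, weight by a sensor likelihood, resample.

random.seed(0)

def pf_step(particles, u, z, sigma_motion=0.5, sigma_sensor=1.0):
    # Prediction: sample x_t ~ P(x | u, x') for every particle.
    moved = [x + u + random.gauss(0.0, sigma_motion) for x in particles]
    # Correction: weight by P(z | x); z is a noisy range to the landmark at 0.
    weights = [math.exp(-0.5 * ((abs(x) - z) / sigma_sensor) ** 2)
               for x in moved]
    # Resampling: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(-10.0, 10.0) for _ in range(500)]
for u, z in [(1.0, 6.0), (1.0, 7.0), (1.0, 8.0)]:
    particles = pf_step(particles, u, z)
estimate = sum(particles) / len(particles)   # posterior mean
```

The first range reading is ambiguous (x ≈ +6 or −6 both fit), but the subsequent readings are only consistent with the rightward-moving hypothesis, so the particle cloud collapses near x ≈ 8.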
