Introduction to Mobile Robotics
Bayes Filter – Kalman Filter
Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Giorgio Grisetti, Kai Arras
Bayes Filter Reminder

Algorithm Bayes_filter(Bel(x), d):
1.  η = 0
2.  If d is a perceptual data item z then
3.    For all x do
4.      Bel'(x) = P(z | x) Bel(x)
5.      η = η + Bel'(x)
6.    For all x do
7.      Bel'(x) = η⁻¹ Bel'(x)
8.  Else if d is an action data item u then
9.    For all x do
10.     Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
11. Return Bel'(x)
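As a concrete illustration, here is a minimal Python sketch of this discrete Bayes filter over a finite state grid. The sensor and motion models `p_z_given_x` and `p_x_given_ux` are hypothetical placeholders supplied by the caller, not part of the original slides:

```python
import numpy as np

def bayes_filter(bel, d, kind, p_z_given_x=None, p_x_given_ux=None):
    """One discrete Bayes filter update over a finite state grid.

    bel:  1D array, Bel(x) for each discrete state x
    d:    the data item (a measurement z or a control u)
    kind: 'perception' or 'action'
    p_z_given_x(z):  returns an array of P(z | x) for all x (assumed model)
    p_x_given_ux(u): returns a matrix M with M[x, x'] = P(x | u, x') (assumed model)
    """
    if kind == 'perception':              # correction: Bel'(x) = η P(z|x) Bel(x)
        bel_new = p_z_given_x(d) * bel
        bel_new /= bel_new.sum()          # η normalizes the posterior
    else:                                 # prediction: Bel'(x) = Σ_x' P(x|u,x') Bel(x')
        bel_new = p_x_given_ux(d) @ bel
    return bel_new
```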
Kalman Filter
- Bayes filter with Gaussians
- Developed in the late 1950s
- The most relevant Bayes filter variant in practice
- Applications range from economics, weather forecasting, and satellite navigation to robotics and many more.
- The Kalman filter "algorithm" is essentially a sequence of matrix multiplications!
Gaussians
Univariate, $p(x) \sim N(\mu, \sigma^2)$:
$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
Multivariate, $p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma)$:
$$p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})}$$
[Figure: univariate density around µ with spread σ; multivariate density contours]
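As a quick sanity check, a minimal NumPy sketch that evaluates both densities at a point; the numerical inputs are arbitrary example values:

```python
import numpy as np

def gauss_1d(x, mu, sigma2):
    """Univariate normal density N(x; mu, sigma^2)."""
    return np.exp(-0.5 * (x - mu)**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)

def gauss_nd(x, mu, Sigma):
    """Multivariate normal density N(x; mu, Sigma)."""
    d = len(mu)
    diff = x - mu
    norm = np.sqrt((2 * np.pi)**d * np.linalg.det(Sigma))
    return np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff)) / norm

print(gauss_1d(0.5, mu=0.0, sigma2=1.0))            # ~0.3521
print(gauss_nd(np.array([0.5, -0.2]),
               mu=np.zeros(2), Sigma=np.eye(2)))    # ~0.1377
```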
Gaussians
[Figures and video: example Gaussian densities in 1D, 2D, and 3D]
Properties of Gaussians
- Linear transformation: $X \sim N(\mu, \sigma^2),\ Y = aX + b \;\Rightarrow\; Y \sim N(a\mu + b,\ a^2\sigma^2)$
- Product: $X_1 \sim N(\mu_1, \sigma_1^2),\ X_2 \sim N(\mu_2, \sigma_2^2) \;\Rightarrow\; p(X_1)\,p(X_2) \sim N\!\left(\frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\mu_1 + \frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\mu_2,\ \frac{1}{\sigma_1^{-2}+\sigma_2^{-2}}\right)$
Multivariate Gaussians
- Linear transformation: $X \sim N(\mu, \Sigma),\ Y = AX + B \;\Rightarrow\; Y \sim N(A\mu + B,\ A\Sigma A^T)$
- Product: $X_1 \sim N(\mu_1, \Sigma_1),\ X_2 \sim N(\mu_2, \Sigma_2) \;\Rightarrow\; p(X_1)\,p(X_2) \sim N\!\left(\frac{\Sigma_2}{\Sigma_1+\Sigma_2}\mu_1 + \frac{\Sigma_1}{\Sigma_1+\Sigma_2}\mu_2,\ \frac{\Sigma_1\Sigma_2}{\Sigma_1+\Sigma_2}\right)$ (where "division" denotes matrix inversion)
- We stay Gaussian as long as we start with Gaussians and perform only linear transformations.
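A minimal NumPy sketch of the linear-transformation property: it propagates a Gaussian through $Y = AX + B$ in closed form and verifies the result against samples. The matrices A, B and the input Gaussian are arbitrary example values:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
A = np.array([[0.5, 1.0], [0.0, 1.0]])
B = np.array([1.0, -1.0])

# Closed form: Y = A X + B  =>  Y ~ N(A mu + B, A Sigma A^T)
mu_y = A @ mu + B
Sigma_y = A @ Sigma @ A.T

# Monte Carlo check
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ A.T + B
print(mu_y, Y.mean(axis=0))      # should agree closely
print(Sigma_y, np.cov(Y.T))      # should agree closely
```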
Discrete Kalman Filter
Estimates the state x of a discrete-time controlled process that is governed by the linear stochastic difference equation
$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$$
with a measurement
$$z_t = C_t x_t + \delta_t$$
Components of a Kalman Filter
- $A_t$: Matrix (n×n) that describes how the state evolves from t−1 to t without controls or noise.
- $B_t$: Matrix (n×l) that describes how the control $u_t$ changes the state from t−1 to t.
- $C_t$: Matrix (k×n) that describes how to map the state $x_t$ to an observation $z_t$.
- $\varepsilon_t$, $\delta_t$: Random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariances $Q_t$ and $R_t$, respectively.
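As a concrete (hypothetical) instantiation of these components, consider a 1D constant-velocity model with state $x_t = (p_t, v_t)^T$, an acceleration control, and a position sensor; all numerical noise levels below are illustrative assumptions:

```python
import numpy as np

dt = 0.1                                   # time step (s)
A = np.array([[1.0, dt],                   # p' = p + v*dt
              [0.0, 1.0]])                 # v' = v
B = np.array([[0.5 * dt**2],               # acceleration control enters
              [dt]])                       # position and velocity
C = np.array([[1.0, 0.0]])                 # we only observe the position
Q = 0.01 * np.eye(2)                       # process noise covariance (assumed)
R = np.array([[0.25]])                     # measurement noise covariance (assumed)
```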
Bayes Filter Reminder
Prediction: $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$
Correction: $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$
Kalman Filter Updates in 1D
[Figure: 1D Gaussian beliefs before and after an update]
Kalman Filter Updates in 1D
$$bel(x_t) = \begin{cases} \mu_t = \bar\mu_t + K_t(z_t - C_t\bar\mu_t) \\ \Sigma_t = (I - K_t C_t)\,\bar\Sigma_t \end{cases} \quad \text{with } K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + R_t)^{-1}$$
How to get the blue one? → Kalman correction step
Kalman Filter Updates in 1D
How to get the magenta one? → State prediction step
$$\overline{bel}(x_t) = \begin{cases} \bar\mu_t = A_t \mu_{t-1} + B_t u_t \\ \bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + Q_t \end{cases}$$
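A worked 1D example with made-up numbers may help. Start from $bel(x_{t-1}) = N(0, 1)$, take $A_t = B_t = C_t = 1$, $u_t = 1$, $Q_t = 0.5$, then observe $z_t = 1.4$ with $R_t = 1$:
$$\text{Prediction: } \bar\mu_t = 1\cdot 0 + 1\cdot 1 = 1, \qquad \bar\sigma_t^2 = 1^2 \cdot 1 + 0.5 = 1.5$$
$$\text{Correction: } K_t = \frac{1.5}{1.5 + 1} = 0.6, \qquad \mu_t = 1 + 0.6\,(1.4 - 1) = 1.24, \qquad \sigma_t^2 = (1 - 0.6)\cdot 1.5 = 0.6$$
The corrected mean moves toward the measurement, and the variance shrinks below the predicted variance.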
Kalman Filter Updates
Linear Gaussian Systems: Initialization
Initial belief is normally distributed:
$$bel(x_0) = N(x_0;\, \mu_0,\, \Sigma_0)$$
Linear Gaussian Systems: Dynamics
Dynamics are a linear function of state and control plus additive noise:
$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$$
$$p(x_t \mid u_t, x_{t-1}) = N(x_t;\, A_t x_{t-1} + B_t u_t,\, Q_t)$$
$$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$$
with $p(x_t \mid u_t, x_{t-1}) \sim N(x_t;\, A_t x_{t-1} + B_t u_t,\, Q_t)$ and $bel(x_{t-1}) \sim N(x_{t-1};\, \mu_{t-1},\, \Sigma_{t-1})$.
Linear Gaussian Systems: Dynamics
$$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$$
$$\Downarrow$$
$$\overline{bel}(x_t) = \eta \int \exp\!\left\{-\tfrac{1}{2}(x_t - A_t x_{t-1} - B_t u_t)^T Q_t^{-1} (x_t - A_t x_{t-1} - B_t u_t)\right\} \exp\!\left\{-\tfrac{1}{2}(x_{t-1} - \mu_{t-1})^T \Sigma_{t-1}^{-1} (x_{t-1} - \mu_{t-1})\right\} dx_{t-1}$$
$$\Downarrow$$
$$\overline{bel}(x_t) = \begin{cases} \bar\mu_t = A_t \mu_{t-1} + B_t u_t \\ \bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + Q_t \end{cases}$$
Linear Gaussian Systems: Observations
Observations are a linear function of the state plus additive noise:
$$z_t = C_t x_t + \delta_t$$
$$p(z_t \mid x_t) = N(z_t;\, C_t x_t,\, R_t)$$
$$bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$$
with $p(z_t \mid x_t) \sim N(z_t;\, C_t x_t,\, R_t)$ and $\overline{bel}(x_t) \sim N(x_t;\, \bar\mu_t,\, \bar\Sigma_t)$.
Linear Gaussian Systems: Observations
$$bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$$
$$\Downarrow$$
$$bel(x_t) = \eta \exp\!\left\{-\tfrac{1}{2}(z_t - C_t x_t)^T R_t^{-1} (z_t - C_t x_t)\right\} \exp\!\left\{-\tfrac{1}{2}(x_t - \bar\mu_t)^T \bar\Sigma_t^{-1} (x_t - \bar\mu_t)\right\}$$
$$\Downarrow$$
$$bel(x_t) = \begin{cases} \mu_t = \bar\mu_t + K_t(z_t - C_t \bar\mu_t) \\ \Sigma_t = (I - K_t C_t)\,\bar\Sigma_t \end{cases} \quad \text{with } K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + R_t)^{-1}$$
Kalman Filter Algorithm

Algorithm Kalman_filter(µ_{t-1}, Σ_{t-1}, u_t, z_t):
1. Prediction:
2.   $\bar\mu_t = A_t \mu_{t-1} + B_t u_t$
3.   $\bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + Q_t$
4. Correction:
5.   $K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + R_t)^{-1}$
6.   $\mu_t = \bar\mu_t + K_t (z_t - C_t \bar\mu_t)$
7.   $\Sigma_t = (I - K_t C_t)\, \bar\Sigma_t$
8. Return $\mu_t, \Sigma_t$
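A minimal NumPy sketch of this algorithm, a direct transcription of lines 2-8 above rather than any official reference implementation; the argument names follow the slide's notation:

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, Q, R):
    """One Kalman filter step: returns (mu_t, Sigma_t)."""
    # Prediction
    mu_bar = A @ mu + B @ u                    # mu_bar = A mu + B u
    Sigma_bar = A @ Sigma @ A.T + Q            # Sigma_bar = A Sigma A^T + Q
    # Correction
    S = C @ Sigma_bar @ C.T + R                # innovation covariance
    K = Sigma_bar @ C.T @ np.linalg.inv(S)     # Kalman gain
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```

With the constant-velocity matrices sketched earlier, calling `kalman_filter` once per time step with the latest control and measurement maintains the Gaussian belief $(\mu_t, \Sigma_t)$.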
Kalman Filter Algorithm
Kalman Filter Algorithm
[Block diagram: Prediction → Observation → Matching → Correction]
The Prediction-Correction-Cycle
Prediction:
1D: $\overline{bel}(x_t) = \begin{cases} \bar\mu_t = a_t \mu_{t-1} + b_t u_t \\ \bar\sigma_t^2 = a_t^2 \sigma_{t-1}^2 + \sigma_{act,t}^2 \end{cases}$
General: $\overline{bel}(x_t) = \begin{cases} \bar\mu_t = A_t \mu_{t-1} + B_t u_t \\ \bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + Q_t \end{cases}$
The Prediction-Correction-Cycle
Correction:
1D: $bel(x_t) = \begin{cases} \mu_t = \bar\mu_t + K_t(z_t - \bar\mu_t) \\ \sigma_t^2 = (1 - K_t)\,\bar\sigma_t^2 \end{cases} \quad \text{with } K_t = \frac{\bar\sigma_t^2}{\bar\sigma_t^2 + \sigma_{obs,t}^2}$
General: $bel(x_t) = \begin{cases} \mu_t = \bar\mu_t + K_t(z_t - C_t\bar\mu_t) \\ \Sigma_t = (I - K_t C_t)\,\bar\Sigma_t \end{cases} \quad \text{with } K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + R_t)^{-1}$
The Prediction-Correction-Cycle
Putting both steps together: each cycle first applies the prediction equations above to obtain $\overline{bel}(x_t)$ from the control $u_t$, then applies the correction equations to fold in the measurement $z_t$, yielding $bel(x_t)$; the output of one cycle is the input of the next.
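A self-contained 1D sketch of this cycle in Python; all model constants and the simulated world are illustrative assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 1.0, 1.0                  # scalar motion model x_t = a x_{t-1} + b u_t
var_act, var_obs = 0.5, 1.0      # process and measurement noise variances
mu, var = 0.0, 1.0               # initial belief N(0, 1)
x = 0.0                          # true (simulated) state

for t in range(20):
    u = 1.0                                                # constant control
    x = a * x + b * u + rng.normal(0, np.sqrt(var_act))    # simulate motion
    z = x + rng.normal(0, np.sqrt(var_obs))                # simulate sensor
    # Prediction
    mu_bar = a * mu + b * u
    var_bar = a**2 * var + var_act
    # Correction
    K = var_bar / (var_bar + var_obs)
    mu = mu_bar + K * (z - mu_bar)
    var = (1 - K) * var_bar
    print(f"t={t:2d}  true={x:6.2f}  est={mu:6.2f}  var={var:.3f}")
```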
Kalman Filter Summary
- Highly efficient: polynomial in the measurement dimensionality k and state dimensionality n: $O(k^{2.376} + n^2)$
- Optimal for linear Gaussian systems!
- Most robotics systems are nonlinear!
Nonlinear Dynamic Systems
Most realistic robotic problems involve nonlinear functions:
$$x_t = g(u_t, x_{t-1})$$
$$z_t = h(x_t)$$
Linearity Assumption Revisited
Non-linear Function
EKF Linearization (1)
EKF Linearization (2)
EKF Linearization (3)
EKF Linearization: First Order Taylor Series Expansion
Prediction:
$$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}\,(x_{t-1} - \mu_{t-1}) = g(u_t, \mu_{t-1}) + G_t\,(x_{t-1} - \mu_{t-1})$$
Correction:
$$h(x_t) \approx h(\bar\mu_t) + \frac{\partial h(\bar\mu_t)}{\partial x_t}\,(x_t - \bar\mu_t) = h(\bar\mu_t) + H_t\,(x_t - \bar\mu_t)$$
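To make $H_t$ concrete, here is a sketch for a hypothetical range-only sensor $h(x) = \sqrt{x_1^2 + x_2^2}$ that measures the distance of a planar robot from the origin; this example model is not from the slides. The Jacobian is evaluated at the predicted mean:

```python
import numpy as np

def h(x):
    """Range-only measurement: distance of position x = (x1, x2) from origin."""
    return np.array([np.hypot(x[0], x[1])])

def H_jacobian(x):
    """Analytic Jacobian of h, a 1x2 matrix evaluated at x."""
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r, x[1] / r]])

mu_bar = np.array([3.0, 4.0])
print(h(mu_bar))            # [5.0]
print(H_jacobian(mu_bar))   # [[0.6  0.8]]
```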
EKF Algorithm

Algorithm Extended_Kalman_filter(µ_{t-1}, Σ_{t-1}, u_t, z_t):
1. Prediction:
2.   $\bar\mu_t = g(u_t, \mu_{t-1})$   (KF: $\bar\mu_t = A_t \mu_{t-1} + B_t u_t$)
3.   $\bar\Sigma_t = G_t \Sigma_{t-1} G_t^T + Q_t$   (KF: $\bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + Q_t$)
4. Correction:
5.   $K_t = \bar\Sigma_t H_t^T (H_t \bar\Sigma_t H_t^T + R_t)^{-1}$   (KF: $K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + R_t)^{-1}$)
6.   $\mu_t = \bar\mu_t + K_t (z_t - h(\bar\mu_t))$   (KF: $\mu_t = \bar\mu_t + K_t (z_t - C_t \bar\mu_t)$)
7.   $\Sigma_t = (I - K_t H_t)\, \bar\Sigma_t$   (KF: $\Sigma_t = (I - K_t C_t)\, \bar\Sigma_t$)
8. Return $\mu_t, \Sigma_t$

with $G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}$ and $H_t = \frac{\partial h(\bar\mu_t)}{\partial x_t}$.
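A minimal NumPy sketch of one EKF step under the same notation; `g`, `h`, `G_jac`, and `H_jac` are user-supplied model functions (for instance the range-only `h` and `H_jacobian` above), not part of the slides:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G_jac, h, H_jac, Q, R):
    """One EKF step: nonlinear models g, h with Jacobians G_jac, H_jac."""
    # Prediction (linearize g around the previous mean)
    mu_bar = g(u, mu)
    G = G_jac(u, mu)
    Sigma_bar = G @ Sigma @ G.T + Q
    # Correction (linearize h around the predicted mean)
    H = H_jac(mu_bar)
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + R)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```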