Introduction to Mobile Robotics — Bayes Filter: Extended Kalman Filter. Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Kai Arras
Bayes Filter Reminder

$$bel(x_t) = \eta \, p(z_t \mid x_t) \int p(x_t \mid u_t, x_{t-1}) \, bel(x_{t-1}) \, dx_{t-1}$$

§ Prediction: $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}) \, bel(x_{t-1}) \, dx_{t-1}$
§ Correction: $bel(x_t) = \eta \, p(z_t \mid x_t) \, \overline{bel}(x_t)$
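For concreteness, a minimal sketch of one Bayes filter step on a discrete grid, where the integral becomes a sum; the motion matrix and sensor likelihood are inputs you would supply and are assumptions of this sketch, not given on the slides:

```python
import numpy as np

def bayes_filter_step(bel, p_motion, p_z):
    """One discrete Bayes filter step.

    bel:      belief over grid cells, shape (n,)
    p_motion: p(x_t | u_t, x_{t-1}) as an (n, n) matrix (column = old state)
    p_z:      likelihood p(z_t | x_t) evaluated per cell, shape (n,)
    """
    bel_bar = p_motion @ bel   # prediction: the integral becomes a sum
    bel = p_z * bel_bar        # correction: weight by the measurement likelihood
    return bel / bel.sum()     # eta normalizes the posterior
```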
Discrete Kalman Filter

Estimates the state $x$ of a discrete-time controlled process
$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$$
with a measurement
$$z_t = C_t x_t + \delta_t$$
Components of a Kalman Filter
§ $A_t$: matrix ($n \times n$) that describes how the state evolves from $t-1$ to $t$ without controls or noise.
§ $B_t$: matrix ($n \times l$) that describes how the control $u_t$ changes the state from $t-1$ to $t$.
§ $C_t$: matrix ($k \times n$) that describes how to map the state $x_t$ to an observation $z_t$.
§ $\varepsilon_t$, $\delta_t$: random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariance $Q_t$ and $R_t$, respectively.
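As a concrete, assumed example (not from the slides): the matrices for a 1-D constant-velocity system with state $(p, v)$, an acceleration control, and position-only measurements could look like this:

```python
import numpy as np

dt = 0.1                                  # assumed time step
A = np.array([[1.0, dt], [0.0, 1.0]])     # n x n: position integrates velocity
B = np.array([[0.5 * dt**2], [dt]])       # n x l: acceleration command u_t
C = np.array([[1.0, 0.0]])                # k x n: we observe position only
Q = 0.01 * np.eye(2)                      # process noise covariance (eps_t)
R = np.array([[0.25]])                    # measurement noise covariance (delta_t)
```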
Kalman Filter Update Examples
[Figures: prediction, measurement, and correction; the corrected estimate is a weighted mean of prediction and measurement.]
Kalman Filter Algorithm

1. Algorithm Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
2. Prediction:
3. $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$
4. $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + Q_t$
5. Correction:
6. $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + R_t)^{-1}$
7. $\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$
8. $\Sigma_t = (I - K_t C_t) \bar{\Sigma}_t$
9. Return $\mu_t$, $\Sigma_t$
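The algorithm transcribes directly into NumPy; this sketch follows the slide line by line (matrix shapes as in the example above):

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, Q, R):
    """One Kalman filter step: prediction followed by correction."""
    # Prediction (lines 3-4)
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q
    # Correction (lines 6-8)
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + R)
    mu = mu_bar + K @ (z - C @ mu_bar)
    Sigma = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu, Sigma
```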
Nonlinear Dynamic Systems
§ Most realistic robotic problems involve nonlinear functions. Instead of the linear model
$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t, \qquad z_t = C_t x_t + \delta_t$$
we have
$$x_t = g(u_t, x_{t-1}), \qquad z_t = h(x_t)$$
Linearity Assumption Revisited
Non-Linear Function
[Figure: passing a Gaussian through a non-linear function yields a non-Gaussian distribution.]
Non-Gaussian Distributions
§ The non-linear functions lead to non-Gaussian distributions
§ The Kalman filter is no longer applicable!
What can be done to resolve this? Local linearization!
EKF Linearization: First-Order Taylor Expansion

§ Prediction:
$$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}} (x_{t-1} - \mu_{t-1}) = g(u_t, \mu_{t-1}) + G_t (x_{t-1} - \mu_{t-1})$$

§ Correction:
$$h(x_t) \approx h(\bar{\mu}_t) + \frac{\partial h(\bar{\mu}_t)}{\partial x_t} (x_t - \bar{\mu}_t) = h(\bar{\mu}_t) + H_t (x_t - \bar{\mu}_t)$$

$G_t$ and $H_t$ are Jacobian matrices; the resulting approximations are linear functions of the state.
Reminder: Jacobian Matrix
§ It is a non-square matrix in general
§ Given a vector-valued function
$$f(x) = \begin{pmatrix} f_1(x) \\ \vdots \\ f_m(x) \end{pmatrix}, \quad x \in \mathbb{R}^n$$
§ The Jacobian matrix is defined as
$$F_x = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} \end{pmatrix}$$
Reminder: Jacobian Matrix
§ It describes the orientation of the tangent plane to the vector-valued function at a given point
§ Generalizes the gradient of a scalar-valued function
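When the analytic Jacobian is tedious to derive, a finite-difference approximation is a useful sanity check; a minimal sketch (the helper name and step size `eps` are choices, not from the slides):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference approximation of the m x n Jacobian of f at x."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps                    # perturb one coordinate at a time
        J[:, j] = (np.asarray(f(x + dx)) - fx) / eps
    return J
```

Applying this to $g(u_t, \cdot)$ at $\mu_{t-1}$ or to $h$ at $\bar{\mu}_t$ yields numerical stand-ins for the $G_t$ and $H_t$ of the Taylor expansion above.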
EKF Linearization (1)–(3)
[Figures: examples of the EKF linearization quality for differently shaped input Gaussians.]
EKF Algorithm

1. Extended_Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
2. Prediction:
3. $\bar{\mu}_t = g(u_t, \mu_{t-1})$   (KF: $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$)
4. $\bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + Q_t$   (KF: $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + Q_t$)
5. Correction:
6. $K_t = \bar{\Sigma}_t H_t^T (H_t \bar{\Sigma}_t H_t^T + R_t)^{-1}$   (KF: $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + R_t)^{-1}$)
7. $\mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t))$   (KF: $\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$)
8. $\Sigma_t = (I - K_t H_t) \bar{\Sigma}_t$   (KF: $\Sigma_t = (I - K_t C_t) \bar{\Sigma}_t$)
9. Return $\mu_t$, $\Sigma_t$

with $G_t = \dfrac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}$ and $H_t = \dfrac{\partial h(\bar{\mu}_t)}{\partial x_t}$.
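A sketch of one EKF step in NumPy; `g` and `h` are the nonlinear models, and the Jacobian callbacks `G(u, mu)` and `H(mu)` are passed in (these signatures are assumptions of the sketch, not part of the slides):

```python
import numpy as np

def extended_kalman_filter(mu, Sigma, u, z, g, h, G, H, Q, R):
    """One EKF step with externally supplied models and Jacobians."""
    # Prediction (lines 3-4): linearize g around the previous mean
    Gt = G(u, mu)
    mu_bar = g(u, mu)
    Sigma_bar = Gt @ Sigma @ Gt.T + Q
    # Correction (lines 6-8): linearize h around the predicted mean
    Ht = H(mu_bar)
    K = Sigma_bar @ Ht.T @ np.linalg.inv(Ht @ Sigma_bar @ Ht.T + R)
    mu = mu_bar + K @ (z - h(mu_bar))
    Sigma = (np.eye(len(mu)) - K @ Ht) @ Sigma_bar
    return mu, Sigma
```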
Example: EKF Localization § EKF localization with landmarks (point features)
1. EKF_localization($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$, $m$):
Prediction:
2. $G_t = \dfrac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}} = \begin{pmatrix} \frac{\partial x'}{\partial \mu_{t-1,x}} & \frac{\partial x'}{\partial \mu_{t-1,y}} & \frac{\partial x'}{\partial \mu_{t-1,\theta}} \\ \frac{\partial y'}{\partial \mu_{t-1,x}} & \frac{\partial y'}{\partial \mu_{t-1,y}} & \frac{\partial y'}{\partial \mu_{t-1,\theta}} \\ \frac{\partial \theta'}{\partial \mu_{t-1,x}} & \frac{\partial \theta'}{\partial \mu_{t-1,y}} & \frac{\partial \theta'}{\partial \mu_{t-1,\theta}} \end{pmatrix}$   (Jacobian of $g$ w.r.t. location)
3. $V_t = \dfrac{\partial g(u_t, \mu_{t-1})}{\partial u_t} = \begin{pmatrix} \frac{\partial x'}{\partial v_t} & \frac{\partial x'}{\partial \omega_t} \\ \frac{\partial y'}{\partial v_t} & \frac{\partial y'}{\partial \omega_t} \\ \frac{\partial \theta'}{\partial v_t} & \frac{\partial \theta'}{\partial \omega_t} \end{pmatrix}$   (Jacobian of $g$ w.r.t. control)
4. $Q_t = \begin{pmatrix} (\alpha_1 |v_t| + \alpha_2 |\omega_t|)^2 & 0 \\ 0 & (\alpha_3 |v_t| + \alpha_4 |\omega_t|)^2 \end{pmatrix}$   (motion noise)
5. $\bar{\mu}_t = g(u_t, \mu_{t-1})$   (predicted mean)
6. $\bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + V_t Q_t V_t^T$   (predicted covariance; $V_t$ maps $Q_t$ into state space)
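The slide leaves $g$ and its Jacobians abstract; a sketch that writes them out for the standard circular-arc velocity motion model (an assumption here, and valid only for $\omega_t \neq 0$):

```python
import numpy as np

def ekf_localization_prediction(mu, Sigma, u, alpha, dt):
    """Prediction step for pose (x, y, theta) under control u = (v, omega)."""
    x, y, theta = mu
    v, w = u
    s, c = np.sin(theta), np.cos(theta)
    s_, c_ = np.sin(theta + w * dt), np.cos(theta + w * dt)
    # Predicted mean: motion along a circular arc (line 5)
    mu_bar = np.array([x - v / w * s + v / w * s_,
                       y + v / w * c - v / w * c_,
                       theta + w * dt])
    # G_t: Jacobian of g w.r.t. the pose (line 2)
    G = np.array([[1.0, 0.0, -v / w * c + v / w * c_],
                  [0.0, 1.0, -v / w * s + v / w * s_],
                  [0.0, 0.0, 1.0]])
    # V_t: Jacobian of g w.r.t. the control (line 3)
    V = np.array([[(-s + s_) / w,  v * (s - s_) / w**2 + v * c_ * dt / w],
                  [( c - c_) / w, -v * (c - c_) / w**2 + v * s_ * dt / w],
                  [0.0, dt]])
    # Motion noise in control space (line 4)
    Q = np.diag([(alpha[0] * abs(v) + alpha[1] * abs(w))**2,
                 (alpha[2] * abs(v) + alpha[3] * abs(w))**2])
    Sigma_bar = G @ Sigma @ G.T + V @ Q @ V.T   # line 6
    return mu_bar, Sigma_bar
```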
Correction:
2. $\hat{z}_t = \begin{pmatrix} \sqrt{(m_x - \bar{\mu}_{t,x})^2 + (m_y - \bar{\mu}_{t,y})^2} \\ \operatorname{atan2}(m_y - \bar{\mu}_{t,y},\, m_x - \bar{\mu}_{t,x}) - \bar{\mu}_{t,\theta} \end{pmatrix}$   (predicted measurement mean; depends on the observation type)
3. $H_t = \dfrac{\partial h(\bar{\mu}_t, m)}{\partial x_t} = \begin{pmatrix} \frac{\partial r_t}{\partial \bar{\mu}_{t,x}} & \frac{\partial r_t}{\partial \bar{\mu}_{t,y}} & \frac{\partial r_t}{\partial \bar{\mu}_{t,\theta}} \\ \frac{\partial \varphi_t}{\partial \bar{\mu}_{t,x}} & \frac{\partial \varphi_t}{\partial \bar{\mu}_{t,y}} & \frac{\partial \varphi_t}{\partial \bar{\mu}_{t,\theta}} \end{pmatrix}$   (Jacobian of $h$ w.r.t. location)
4. $R_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\varphi^2 \end{pmatrix}$   (measurement noise)
5. $S_t = H_t \bar{\Sigma}_t H_t^T + R_t$   (innovation covariance)
6. $K_t = \bar{\Sigma}_t H_t^T S_t^{-1}$   (Kalman gain)
7. $\mu_t = \bar{\mu}_t + K_t (z_t - \hat{z}_t)$   (updated mean)
8. $\Sigma_t = (I - K_t H_t) \bar{\Sigma}_t$   (updated covariance)
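A sketch of this correction step for a single range-bearing observation of landmark $m = (m_x, m_y)$; the analytic $H_t$ below follows from differentiating $\hat{z}_t$, and the bearing-innovation normalization is an added practical detail, not on the slide:

```python
import numpy as np

def ekf_localization_correction(mu_bar, Sigma_bar, z, m, sigma_r, sigma_phi):
    """Correction with z = (range, bearing) to landmark m = (m_x, m_y)."""
    dx, dy = m[0] - mu_bar[0], m[1] - mu_bar[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    # Predicted measurement mean (line 2)
    z_hat = np.array([r, np.arctan2(dy, dx) - mu_bar[2]])
    # Jacobian of h w.r.t. the pose (line 3)
    H = np.array([[-dx / r, -dy / r,  0.0],
                  [ dy / q, -dx / q, -1.0]])
    Rt = np.diag([sigma_r**2, sigma_phi**2])        # line 4
    S = H @ Sigma_bar @ H.T + Rt                    # innovation covariance
    K = Sigma_bar @ H.T @ np.linalg.inv(S)          # Kalman gain
    nu = z - z_hat
    nu[1] = np.arctan2(np.sin(nu[1]), np.cos(nu[1]))  # wrap bearing to [-pi, pi]
    mu = mu_bar + K @ nu                            # updated mean
    Sigma = (np.eye(3) - K @ H) @ Sigma_bar         # updated covariance
    return mu, Sigma
```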
EKF Prediction Step Examples
[Figures: predicted uncertainty ellipses for different motion noise contributions $V_t Q_t V_t^T$.]
EKF Observation Prediction Step
[Figures: predicted measurement uncertainty including the measurement noise $R_t$.]
EKF Correction Step
Estimation Sequence (1)
Estimation Sequence (2)
Comparison to Ground Truth
Extended Kalman Filter Summary
§ Ad-hoc solution to deal with non-linearities
§ Performs local linearizations in each step
§ Works well in practice for moderate non-linearities
§ Example: landmark localization
§ There exist better ways of dealing with non-linearities, such as the unscented Kalman filter (UKF)