Lecture 9: Differentiable Bayes Filters
Announcements
- Feedback on project proposals by Wednesday night at the latest
- Kevin Zakka started course notes (see Piazza); bonus points for contributing
What will you take home today?
- Recap: Bayesian Filters
  - Kalman Filter
  - Extended Kalman Filter
  - Particle Filter
- Differentiable Filters
  - Backpropagation through a Kalman Filter
  - Backpropagation through a Particle Filter
Recap: Bayes Filters

[Figure: graphical model with hidden states $x_{t-1}, x_t, x_{t+1}$, controls $u_{t-1}, u_t, u_{t+1}$, and observations $z_{t-1}, z_t, z_{t+1}$]
Kalman Filter

General state-space model:
$$x_t = f(x_{t-1}, u_t, \omega_t), \qquad z_t = h(x_t, \nu_t)$$

Linear-Gaussian case:
$$x_t = f(x_{t-1}, u_t, \omega_t) = A x_{t-1} + B u_t + \omega_t$$
$$z_t = h(x_t, \nu_t) = H x_t + \nu_t$$
$$\omega \sim \mathcal{N}(0, Q_t), \qquad \nu \sim \mathcal{N}(0, R_t), \qquad x \sim \mathcal{N}(\hat{x}, P)$$
Kalman Filter

Prediction step:
$$\hat{x}_t = A x_{t-1} + B u_t \quad (1)$$
$$\hat{P}_t = A P_{t-1} A^T + Q_t \quad (2)$$

Update step:
$$i_t = z_t - H \hat{x}_t \quad (3)$$
$$K_t = \hat{P}_t H^T \left( H \hat{P}_t H^T + R_t \right)^{-1} \quad (4)$$
$$x_t = \hat{x}_t + K_t i_t \quad (5)$$
$$P_t = (I_n - K_t H) \hat{P}_t \quad (6)$$
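Steps (1)-(6) can be sketched in a few lines of NumPy. The function name `kf_step` and its argument order are illustrative, not part of the lecture:

```python
import numpy as np

def kf_step(x, P, u, z, A, B, H, Q, R):
    """One Kalman filter step: prediction (1)-(2), then update (3)-(6)."""
    # Prediction
    x_hat = A @ x + B @ u                       # (1) predicted mean
    P_hat = A @ P @ A.T + Q                     # (2) predicted covariance
    # Update
    i = z - H @ x_hat                           # (3) innovation
    S = H @ P_hat @ H.T + R                     #     innovation covariance
    K = P_hat @ H.T @ np.linalg.inv(S)          # (4) Kalman gain
    x_new = x_hat + K @ i                       # (5) posterior mean
    P_new = (np.eye(len(x)) - K @ H) @ P_hat    # (6) posterior covariance
    return x_new, P_new
```

Note that the update shrinks the covariance: incorporating a measurement can only reduce (or keep) the uncertainty along the observed directions.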
Extended Kalman Filter

Same structure, but $f$ and $h$ may be nonlinear; $A$ and $H$ are their Jacobians, evaluated at the current estimate:
$$A = \left.\frac{\partial f}{\partial x}\right|_{x_{t-1}, u_t}, \qquad H = \left.\frac{\partial h}{\partial x}\right|_{\hat{x}_t}$$

Prediction step:
$$\hat{x}_t = f(x_{t-1}, u_t, 0) \quad (1)$$
$$\hat{P}_t = A P_{t-1} A^T + Q_t \quad (2)$$

Update step:
$$i_t = z_t - h(\hat{x}_t, 0) \quad (3)$$
$$K_t = \hat{P}_t H^T \left( H \hat{P}_t H^T + R_t \right)^{-1} \quad (4)$$
$$x_t = \hat{x}_t + K_t i_t \quad (5)$$
$$P_t = (I_n - K_t H) \hat{P}_t \quad (6)$$
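The EKF step is the KF step with the models and their Jacobians passed in as functions. The following is a minimal sketch; `ekf_step`, `F_jac`, and `H_jac` are names chosen here for illustration, and process noise is assumed additive with covariance Q:

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One EKF step. f, h: nonlinear motion/measurement models;
    F_jac, H_jac: their Jacobians w.r.t. the state."""
    # Prediction with the nonlinear model; covariance via the Jacobian A
    x_hat = f(x, u)
    A = F_jac(x, u)
    P_hat = A @ P @ A.T + Q
    # Update: linearize h around the predicted mean
    H = H_jac(x_hat)
    i = z - h(x_hat)
    S = H @ P_hat @ H.T + R
    K = P_hat @ H.T @ np.linalg.inv(S)
    x_new = x_hat + K @ i
    P_new = (np.eye(len(x)) - K @ H) @ P_hat
    return x_new, P_new
```

With linear f and h (and exact Jacobians), this reduces to the ordinary Kalman filter step.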
Particle Filter

Represent the posterior $p(x_t \mid z_{1:t}, u_{1:t}, x_0)$ by a set of particles:
$$X_t = \{ x_t^0, x_t^1, \cdots, x_t^N \}$$
Particle Filter

Starting from an initial set $X_0$, propagate each particle through the motion model and reweight it by the measurement likelihood $p(z_t \mid x_t^i)$:
$$x_t^i = f(x_{t-1}^i, u_t, \omega_t^i)$$
$$w_t^i = w_{t-1}^i \, p(z_t \mid x_t^i)$$
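One propagate-reweight-resample step can be sketched as below. The name `pf_step` and the choice of multinomial resampling are illustrative assumptions, not prescribed by the slides:

```python
import numpy as np

def pf_step(X, w, u, z, f, lik, Q, rng):
    """One particle filter step: propagate with process noise ~ N(0, Q),
    reweight by the measurement likelihood, then resample."""
    N, d = X.shape
    noise = rng.multivariate_normal(np.zeros(d), Q, size=N)
    # x_t^i = f(x_{t-1}^i, u_t, omega_t^i)
    X = np.array([f(X[i], u) for i in range(N)]) + noise
    # w_t^i = w_{t-1}^i * p(z_t | x_t^i), then normalize
    w = w * np.array([lik(z, X[i]) for i in range(N)])
    w = w / w.sum()
    # Multinomial resampling back to uniform weights
    idx = rng.choice(N, size=N, p=w)
    return X[idx], np.full(N, 1.0 / N)
```

After resampling, particles far from the observation are mostly discarded, so the set concentrates in high-likelihood regions of the state space.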
When to use what? How to choose hyperparameters?
Priors and Hyperparameters

A lot of hardcoded knowledge!
1. State representation
2. Models
   - Forward model
     - State to next state
     - Action to next state
   - Measurement model
3. Probabilistic properties
   - Process noise
   - Measurement noise
Differentiable Filters

Can we learn models and hyperparameters from data?

Approach: embed the algorithmic structure of Bayesian filtering into a recurrent neural network.
- Prevents overfitting: the filter structure acts as a regularizer
- Avoids manual tuning and modeling
A Note on Variables
In Robotics and Control:
In Machine Learning:
In Artificial Intelligence:
BackpropKF: Learning Discriminative Deterministic State Estimators. Haarnoja et al. NeurIPS 2016.
- A differentiable version of the Kalman filter
- Uses images as observations; learns a sensor model that outputs a state estimate directly
Differentiable Kalman Filter - Structure
Differentiable Kalman Filter – Loss Function

$$\mathcal{L}(l_{0:T}, \mu_{0:T}, \Sigma_{0:T}, w) = \lambda_1 \sum_{t=0}^{T} \frac{1}{2}\left( (l_t - \mu_t)^T \Sigma_t^{-1} (l_t - \mu_t) + \log |\Sigma_t| \right) + \lambda_2 \sum_{t=0}^{T} \| l_t - \mu_t \|^2 + \lambda_3 \| w \|^2$$
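The loss above is a Gaussian negative log-likelihood of the labels $l_t$ under the filter's belief $\mathcal{N}(\mu_t, \Sigma_t)$, plus an MSE term and weight decay. A minimal NumPy sketch (function name and defaults are assumptions; constant terms of the NLL are dropped):

```python
import numpy as np

def backprop_kf_loss(l, mu, Sigma, w, lam1=1.0, lam2=0.0, lam3=0.0):
    """Gaussian NLL of labels l under belief N(mu_t, Sigma_t),
    plus optional MSE and weight-decay terms."""
    nll, mse = 0.0, 0.0
    for t in range(len(l)):
        e = l[t] - mu[t]
        nll += 0.5 * (e @ np.linalg.inv(Sigma[t]) @ e
                      + np.log(np.linalg.det(Sigma[t])))
        mse += e @ e
    return lam1 * nll + lam2 * mse + lam3 * (w @ w)
```

The NLL term is what forces the filter to learn calibrated covariances: it penalizes both overconfident beliefs (large Mahalanobis error) and underconfident ones (large log-determinant).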
Differentiable Kalman Filter – Experiments and Baselines
Differentiable Kalman Filter – Experiments and Baselines

KITTI Visual Odometry dataset:
- 22 stereo sequences with LIDAR
- 11 sequences with ground truth (GPS/IMU data)
- 11 sequences without ground truth (for evaluation)
Differentiable Kalman Filter – Experiments and Baselines Results reproduced by Claire Chen
Differentiable Particle Filters: End-to-End Learning with Algorithmic Priors. Jonschkowski et al. RSS 2018.
Differentiable Particle Filters – reparameterized noise sampling:
$$\text{sample } n^i \sim \mathcal{N}(0, 1)$$
$$\omega_t^i = Q^{1/2} n^i$$
$$x_t^i = f(x_{t-1}^i, u_t, \omega_t^i) \quad \forall\, x_{t-1}^i \in X_{t-1}$$
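The point of sampling noise this way is the reparameterization trick: the randomness lives in $n^i$, so gradients can flow through $Q^{1/2}$ and $f$ to learnable parameters. A sketch (names are illustrative; $f$ is assumed to take the noise additively):

```python
import numpy as np

def sample_motion_reparam(X_prev, u, f, Q_sqrt, rng):
    """Propagate particles with reparameterized process noise:
    n^i ~ N(0, I), omega^i = Q^{1/2} n^i, x_t^i = f(x_{t-1}^i, u_t) + omega^i."""
    N, d = X_prev.shape
    n = rng.standard_normal((N, d))      # n^i ~ N(0, I): no learnable parameters
    omega = n @ Q_sqrt.T                 # omega^i = Q^{1/2} n^i: differentiable in Q^{1/2}
    return np.array([f(X_prev[i], u) for i in range(N)]) + omega
```

In an actual differentiable filter this would be written with an autodiff framework so that the gradient of a downstream loss reaches `Q_sqrt` and the parameters of `f`.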
Particle Filter Networks with Application to Visual Localization. Karkus et al. CoRL 2018.
Differentiable Particle Filter – Loss Function
Differentiable Particle Filter – Experiments and Baselines