

  1. Particle Filters
     Pieter Abbeel, UC Berkeley EECS
     Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics

  2. Motivation
     - For continuous spaces: often no analytical formulas for Bayes filter updates
     - Solution 1: Histogram filters (not studied in this course)
       - Partition the state space
       - Keep track of the probability of each partition
       - Challenges:
         - What is the dynamics model for the partitioned state?
         - What is the measurement model?
         - Often very fine resolution is required to get reasonable results
     - Solution 2: Particle filters
       - Represent the belief by random samples
       - Can use the actual dynamics and measurement models
       - Naturally allocate computational resources where required (~ adaptive resolution)
       - Aka Monte Carlo filter, survival of the fittest, condensation, bootstrap filter

  3. Sample-based Localization (sonar)

  4. Problem to be Solved
     - Given a sample-based representation S_t = { x_t^1, x_t^2, ..., x_t^N } of Bel(x_t) = P(x_t | z_1, ..., z_t, u_1, ..., u_t)
     - Find a sample-based representation S_{t+1} = { x_{t+1}^1, x_{t+1}^2, ..., x_{t+1}^N } of Bel(x_{t+1}) = P(x_{t+1} | z_1, ..., z_t, z_{t+1}, u_1, ..., u_{t+1})

  5. Dynamics Update
     - Given a sample-based representation S_t = { x_t^1, x_t^2, ..., x_t^N } of Bel(x_t) = P(x_t | z_1, ..., z_t, u_1, ..., u_t)
     - Find a sample-based representation of P(x_{t+1} | z_1, ..., z_t, u_1, ..., u_{t+1})
     - Solution: for i = 1, 2, ..., N, sample x_{t+1}^i from P(X_{t+1} | X_t = x_t^i, u_{t+1})
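In code, the dynamics update is one sampling step per particle. The sketch below assumes, purely for illustration, a 1-D additive-noise motion model x_{t+1} = x_t + u + noise; the function name and noise level are likewise made up, since the slide leaves P(X_{t+1} | X_t, u_{t+1}) abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics_update(particles, u, motion_noise=0.1):
    # Sample x_{t+1}^i from P(X_{t+1} | X_t = x_t^i, u_{t+1}) for every i.
    # Here the (assumed) model is x_{t+1} = x_t + u + N(0, motion_noise^2).
    return particles + u + rng.normal(0.0, motion_noise, size=particles.shape)

particles = np.zeros(1000)                 # all particles start at x = 0
particles = dynamics_update(particles, u=1.0)
```

After the update the particle cloud is centered near x = 1 and has spread out by the motion noise, mirroring how the belief diffuses under the dynamics.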

  6. Observation Update
     - Given a sample-based representation { x_{t+1}^1, x_{t+1}^2, ..., x_{t+1}^N } of P(x_{t+1} | z_1, ..., z_t)
     - Find a sample-based representation of P(x_{t+1} | z_1, ..., z_t, z_{t+1}) = C * P(x_{t+1} | z_1, ..., z_t) * P(z_{t+1} | x_{t+1})
     - Solution: for i = 1, 2, ..., N, set w_{t+1}^(i) = w_t^(i) * P(z_{t+1} | X_{t+1} = x_{t+1}^(i))
     - The distribution is then represented by the weighted set of samples { <x_{t+1}^1, w_{t+1}^1>, <x_{t+1}^2, w_{t+1}^2>, ..., <x_{t+1}^N, w_{t+1}^N> }
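The observation update only touches the weights. In the sketch below, a Gaussian sensor model P(z | x) = N(z; x, σ²) is an illustrative assumption, as are the function name and noise level; any likelihood function would slot in the same way.

```python
import numpy as np

def observation_update(particles, weights, z, meas_noise=0.5):
    # w_{t+1}^i = w_t^i * P(z_{t+1} | X_{t+1} = x_{t+1}^i), here with an
    # assumed Gaussian sensor model z ~ N(x, meas_noise^2).
    likelihood = np.exp(-0.5 * ((z - particles) / meas_noise) ** 2)
    new_weights = weights * likelihood
    return new_weights / new_weights.sum()   # normalize for convenience

particles = np.array([0.0, 1.0, 2.0])
weights = observation_update(particles, np.ones(3) / 3, z=1.0)
# the particle at x = 1.0 now carries the largest weight
```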

  7. Sequential Importance Sampling (SIS) Particle Filter
     - Sample x_1^1, x_1^2, ..., x_1^N from P(X_1)
     - Set w_1^i = 1 for all i = 1, ..., N
     - For t = 1, 2, ...
       - Dynamics update: for i = 1, 2, ..., N, sample x_{t+1}^i from P(X_{t+1} | X_t = x_t^i, u_{t+1})
       - Observation update: for i = 1, 2, ..., N, w_{t+1}^i = w_t^i * P(z_{t+1} | X_{t+1} = x_{t+1}^i)
     - At any time t, the distribution is represented by the weighted set of samples { <x_t^i, w_t^i> ; i = 1, ..., N }
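Putting the two updates together gives a minimal SIS filter. The 1-D random-walk dynamics and Gaussian sensor model are assumptions for the demo, and the per-step normalization is a numerical convenience rather than part of the slide's algorithm. Note there is no resampling, which is exactly what the next slide criticizes.

```python
import numpy as np

rng = np.random.default_rng(1)

def sis_filter(z_seq, n_particles=500, q=0.1, r=0.2):
    # Assumed model: x_{t+1} = x_t + N(0, q^2), z_t = x_t + N(0, r^2).
    x = rng.normal(0.0, 1.0, n_particles)        # sample from P(X_1)
    w = np.ones(n_particles)                     # w_1^i = 1
    for z in z_seq:
        x = x + rng.normal(0.0, q, n_particles)       # dynamics update
        w = w * np.exp(-0.5 * ((z - x) / r) ** 2)     # observation update
        w = w / w.sum()    # normalization (numerical convenience only)
    return x, w

x, w = sis_filter(np.zeros(50))    # fifty measurements of a state near 0
# 1 / sum(w^2), the effective sample size, collapses to a handful of
# particles: the weights degenerate because samples are never resampled
```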

  8. SIS Particle Filter: Major Issue
     - The resulting samples are only weighted by the evidence
     - The samples themselves are never affected by the evidence
     → Fails to concentrate particles/computation in the high-probability areas of the distribution P(x_t | z_1, ..., z_t)

  9. Sequential Importance Resampling (SIR)
     - At any time t, the distribution is represented by the weighted set of samples { <x_t^i, w_t^i> ; i = 1, ..., N }
     → Sample N times from the set of particles
     → The probability of drawing each particle is given by its importance weight
     → More particles/computation focused on the parts of the state space with high probability mass

  10. Sequential Importance Resampling (SIR) Particle Filter
      1.  Algorithm particle_filter(S_{t-1}, u_t, z_t):
      2.    S_t = ∅, η = 0
      3.    For i = 1 ... n                          // generate new samples
      4.      Sample index j(i) from the discrete distribution given by w_{t-1}
      5.      Sample x_t^i from p(x_t | x_{t-1}, u_t) using x_{t-1}^{j(i)} and u_t
      6.      w_t^i = p(z_t | x_t^i)                 // compute importance weight
      7.      η = η + w_t^i                          // update normalization factor
      8.      S_t = S_t ∪ { <x_t^i, w_t^i> }         // insert
      9.    For i = 1 ... n
      10.     w_t^i = w_t^i / η                      // normalize weights
      11.   Return S_t
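The pseudocode above translates almost line for line into NumPy. The 1-D motion and Gaussian sensor models are illustrative assumptions; steps 3-10 of the slide become vectorized array operations.

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_filter(particles, weights, u, z, q=0.1, r=0.2):
    # Assumed model: x_t = x_{t-1} + u + N(0, q^2), z_t = x_t + N(0, r^2).
    n = len(particles)
    j = rng.choice(n, size=n, p=weights)                  # step 4: indices j(i)
    new_particles = particles[j] + u + rng.normal(0.0, q, n)     # step 5
    new_weights = np.exp(-0.5 * ((z - new_particles) / r) ** 2)  # step 6
    new_weights /= new_weights.sum()                      # steps 7, 9-10
    return new_particles, new_weights                     # step 11

n = 1000
particles = rng.normal(0.0, 1.0, n)
weights = np.ones(n) / n
for t in range(20):    # true state moves +0.5 per step; z tracks it exactly
    particles, weights = particle_filter(particles, weights,
                                         u=0.5, z=0.5 * (t + 1))
estimate = np.sum(weights * particles)   # posterior mean, close to 10.0
```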

  11. Particle Filters

  12. Sensor Information: Importance Sampling

  13. Robot Motion

  14. Sensor Information: Importance Sampling

  15. Robot Motion

  16. Noise Dominated by Motion Model [Grisetti, Stachniss, Burgard, T-RO 2006]
      → Most particles get (near) zero weights and are lost.

  17. Importance Sampling
      - Theoretical justification: for any function f we have E_p[f(x)] = ∫ f(x) p(x) dx = ∫ f(x) (p(x) / π(x)) π(x) dx = E_π[f(x) p(x) / π(x)]
      - f could be: whether a grid cell is occupied or not, whether the position of a robot is within 5 cm of some (x, y), etc.

  18. Importance Sampling
      - Task: sample from density p(·)
      - Solution:
        - Sample from a "proposal density" π(·)
        - Weight each sample x^(i) by p(x^(i)) / π(x^(i))
      - E.g.: target density p and proposal density π (figure)
      - Requirement: if π(x) = 0 then p(x) = 0.
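A quick numerical check of this recipe, with a made-up target p = N(2, 1) and a broader proposal π = N(0, 3): weighting samples from π by p/π recovers expectations under p.

```python
import numpy as np

rng = np.random.default_rng(3)

def gauss_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2), used for both target p and proposal pi.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

n = 200_000
x = rng.normal(0.0, 3.0, n)                            # sample from pi
w = gauss_pdf(x, 2.0, 1.0) / gauss_pdf(x, 0.0, 3.0)    # w = p(x) / pi(x)

estimate = np.mean(w * x)    # approximates E_p[X] = 2, not E_pi[X] = 0
```

The broad proposal satisfies the requirement on the slide: wherever p is nonzero, π is too.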

  19. Particle Filters Revisited
      1.  Algorithm particle_filter(S_{t-1}, u_t, z_t):
      2.    S_t = ∅, η = 0
      3.    For i = 1 ... n                          // generate new samples
      4.      Sample index j(i) from the discrete distribution given by w_{t-1}
      5.      Sample x_t^i from π(x_t | x_{t-1}^{j(i)}, u_t, z_t)
      6.      w_t^i = p(z_t | x_t^i) p(x_t^i | x_{t-1}^i, u_t) / π(x_t^i | x_{t-1}^i, u_t, z_t)   // importance weight
      7.      η = η + w_t^i                          // update normalization factor
      8.      S_t = S_t ∪ { <x_t^i, w_t^i> }         // insert
      9.    For i = 1 ... n
      10.     w_t^i = w_t^i / η                      // normalize weights
      11.   Return S_t

  20. Optimal Sequential Proposal π(·)
      - Optimal: π(x_t | x_{t-1}^i, u_t, z_t) = p(x_t | x_{t-1}^i, u_t, z_t)
      → w_t^i = p(z_t | x_t^i) p(x_t^i | x_{t-1}^i, u_t) / π(x_t^i | x_{t-1}^i, u_t, z_t) = p(z_t | x_t^i) p(x_t^i | x_{t-1}^i, u_t) / p(x_t^i | x_{t-1}^i, u_t, z_t)
      - Applying Bayes' rule to the denominator gives: p(x_t^i | x_{t-1}^i, u_t, z_t) = p(z_t | x_t^i, u_t, x_{t-1}^i) p(x_t^i | x_{t-1}^i, u_t) / p(z_t | x_{t-1}^i, u_t)
      - Substitution and simplification gives: w_t^i = p(z_t | x_{t-1}^i, u_t)

  21. Optimal Sequential Proposal π(·)
      - Optimal: π(x_t | x_{t-1}^i, u_t, z_t) = p(x_t | x_{t-1}^i, u_t, z_t)  →  w_t^i = p(z_t | x_{t-1}^i, u_t)
      - Challenges:
        - Typically difficult to sample from p(x_t | x_{t-1}^i, u_t, z_t)
        - Importance weight: typically expensive to compute the integral w_t^i = ∫ p(z_t | x_t) p(x_t | x_{t-1}^i, u_t) dx_t

  22. Example 1: π(·) = Optimal Proposal; Nonlinear Gaussian State Space Model
      - Nonlinear Gaussian state space model (equations on slide)
      - Then the optimal proposal is available in closed form (equations on slide)
      - And so is the importance weight (equation on slide)

  23. Example 2: π(·) = Motion Model
      - π(x_t | x_{t-1}^i, u_t, z_t) = p(x_t | x_{t-1}^i, u_t), so the importance weight reduces to w_t^i = p(z_t | x_t^i)
      → the "standard" particle filter

  24. Example 3: Approximating the Optimal π for Localization [Grisetti, Stachniss, Burgard, T-RO 2006]
      - One (not so desirable) solution: use a smoothed likelihood so that more particles retain a meaningful weight --- BUT information is lost
      - Better: integrate the latest observation z into the proposal π

  25. Example 3: Approximating the Optimal π for Localization: Generating One Weighted Sample
      Build a Gaussian approximation to the optimal sequential proposal:
      1. Initial guess
      2. Execute scan matching starting from the initial guess, resulting in a pose estimate
      3. Sample K points in the region around the pose estimate
      4. Proposal distribution is Gaussian with mean and covariance computed from the K points (formulas on slide)
      5. Sample from the (approximately optimal) sequential proposal distribution
      6. Weight = ∫ p(z_t | x', m) p(x' | x_{t-1}^i, u_t) dx' ≈ η^i

  26. Example 3: Example Particle Distributions [Grisetti, Stachniss, Burgard, T-RO 2006]
      Particles generated from the approximately optimal proposal distribution. If using the standard motion model, in all three cases the particle set would have been similar to (c).

  27. Resampling
      - Consider running a particle filter for a system with deterministic dynamics and no sensors
      - Problem:
        - While no information is obtained that favors one particle over another, due to resampling some particles will disappear, and after running sufficiently long, with very high probability all particles will have become identical
        - On the surface it might look like the particle filter has uniquely determined the state
      - Resampling induces loss of diversity: the variance of the particles decreases, while the variance of the particle set as an estimator of the true belief increases

  28. Resampling Solution I
      - Effective sample size: n_eff = 1 / Σ_i (w̃_t^i)^2, where the w̃_t^i are the normalized weights
      - Example:
        - All weights = 1/N → effective sample size = N
        - All weights = 0, except for one weight = 1 → effective sample size = 1
      - Idea: resample only when the effective sample size is low
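The effective sample size is a one-liner; the sketch below reproduces both limiting cases from the slide.

```python
import numpy as np

def effective_sample_size(weights):
    # n_eff = 1 / sum_i (w_i)^2 for normalized weights w_i.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

uniform = np.ones(100) / 100      # all weights 1/N       -> n_eff = N
one_hot = np.zeros(100)
one_hot[0] = 1.0                  # single nonzero weight -> n_eff = 1
```

A common trigger (a convention of practitioners, not stated on the slide) is to resample only when n_eff falls below N/2.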

  29. Resampling Solution I (ctd)

  30. Resampling Solution II: Low Variance Sampling
      - M = number of particles
      - r in [0, 1/M]
      - Advantages:
        - More systematic coverage of the space of samples
        - If all samples have the same importance weight, no samples are lost
        - Lower computational complexity
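Low-variance sampling draws a single random number r and then walks through the cumulative weights with stride 1/M, as sketched below.

```python
import numpy as np

rng = np.random.default_rng(4)

def low_variance_resample(particles, weights):
    # Draw one r ~ U[0, 1/M]; the M pointers r, r + 1/M, r + 2/M, ...
    # each select the particle whose cumulative-weight bin they fall into.
    M = len(particles)
    r = rng.uniform(0.0, 1.0 / M)
    positions = r + np.arange(M) / M
    indices = np.searchsorted(np.cumsum(weights), positions)
    return particles[indices]

particles = np.array([10.0, 20.0, 30.0, 40.0])
weights = np.array([0.25, 0.25, 0.25, 0.25])
resampled = low_variance_resample(particles, weights)
# equal weights: every particle survives exactly once (no samples lost)
```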

  31. Resampling Solution III
      - Loss of diversity is caused by resampling from a discrete distribution
      - Solution: "regularization"
        - Consider the particles to represent a continuous density
        - Sample from the continuous density
        - E.g., given (1-D) particles, sample from the kernel density estimate p̂(x) = (1/N) Σ_i K((x − x_t^i) / h)
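A minimal regularized resampler: pick indices by weight as usual, then jitter each copy with kernel noise so duplicates become distinct. The Gaussian kernel and fixed bandwidth below are illustrative choices, not prescribed by the slide.

```python
import numpy as np

rng = np.random.default_rng(5)

def regularized_resample(particles, weights, bandwidth=0.1):
    # Sample from the continuous kernel density estimate rather than the
    # discrete particle set: pick an index by weight, then add kernel noise.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx] + rng.normal(0.0, bandwidth, len(particles))

particles = np.zeros(1000)               # 1000 identical particles
weights = np.ones(1000) / 1000
new = regularized_resample(particles, weights)
# the copies are now spread out by the kernel, restoring diversity
```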
