
EE-562: Robot Motion Planning
Simultaneous Localization & Mapping (SLAM)
Dr Abubakr Muhammad, Assistant Professor, Electrical Engineering, LUMS


1. Recursive Bayesian Updating

$$P(x \mid z_1, \ldots, z_n) = \frac{P(z_n \mid x, z_1, \ldots, z_{n-1})\, P(x \mid z_1, \ldots, z_{n-1})}{P(z_n \mid z_1, \ldots, z_{n-1})}$$

Markov assumption: z_n is independent of z_1, ..., z_{n-1} if we know x.

$$P(x \mid z_1, \ldots, z_n) = \frac{P(z_n \mid x)\, P(x \mid z_1, \ldots, z_{n-1})}{P(z_n \mid z_1, \ldots, z_{n-1})} = \eta\, P(z_n \mid x)\, P(x \mid z_1, \ldots, z_{n-1}) = \eta_{1 \ldots n} \left[ \prod_{i=1 \ldots n} P(z_i \mid x) \right] P(x)$$

2. Example: Second Measurement

Given P(z_2 | open) = 0.5, P(z_2 | ¬open) = 0.6, and P(open | z_1) = 2/3:

$$P(open \mid z_2, z_1) = \frac{P(z_2 \mid open)\, P(open \mid z_1)}{P(z_2 \mid open)\, P(open \mid z_1) + P(z_2 \mid \neg open)\, P(\neg open \mid z_1)} = \frac{\tfrac{1}{2} \cdot \tfrac{2}{3}}{\tfrac{1}{2} \cdot \tfrac{2}{3} + \tfrac{3}{5} \cdot \tfrac{1}{3}} = \frac{5}{8} = 0.625$$

z_2 lowers the probability that the door is open.
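A minimal Python sketch of this recursive update for the binary door state (the function name is ours; the numbers are the ones from the slides):

```python
def measurement_update(p_open, p_z_given_open, p_z_given_not_open):
    """One recursive Bayesian measurement update for a binary state."""
    unnorm_open = p_z_given_open * p_open
    unnorm_closed = p_z_given_not_open * (1.0 - p_open)
    eta = 1.0 / (unnorm_open + unnorm_closed)       # normalizer
    return eta * unnorm_open

p_open = 2.0 / 3.0                                  # P(open | z1)
p_open = measurement_update(p_open, 0.5, 0.6)       # incorporate z2
print(p_open)                                       # 0.625
```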

3. Actions

- Often the world is dynamic, since actions carried out by the robot, actions carried out by other agents, or simply the passing of time change the world.
- How can we incorporate such actions?

4. Typical Actions

- The robot turns its wheels to move.
- The robot uses its manipulator to grasp an object.
- Plants grow over time ...
- Actions are never carried out with absolute certainty.
- In contrast to measurements, actions generally increase the uncertainty.

5. Modeling Actions

- To incorporate the outcome of an action u into the current "belief", we use the conditional pdf P(x | u, x').
- This term specifies the probability that executing u changes the state from x' to x.

6. Example: Closing the Door

7. State Transitions

P(x | u, x') for u = "close door":

- P(closed | u, open) = 0.9, P(open | u, open) = 0.1
- P(closed | u, closed) = 1, P(open | u, closed) = 0

If the door is open, the action "close door" succeeds in 90% of all cases.

8. Integrating the Outcome of Actions

Continuous case:
$$P(x \mid u) = \int P(x \mid u, x')\, P(x')\, dx'$$

Discrete case:
$$P(x \mid u) = \sum_{x'} P(x \mid u, x')\, P(x')$$

9. Example: The Resulting Belief

$$P(closed \mid u) = \sum_{x'} P(closed \mid u, x')\, P(x') = P(closed \mid u, open)\, P(open) + P(closed \mid u, closed)\, P(closed) = \frac{9}{10} \cdot \frac{5}{8} + \frac{1}{1} \cdot \frac{3}{8} = \frac{15}{16}$$

$$P(open \mid u) = \sum_{x'} P(open \mid u, x')\, P(x') = P(open \mid u, open)\, P(open) + P(open \mid u, closed)\, P(closed) = \frac{1}{10} \cdot \frac{5}{8} + \frac{0}{1} \cdot \frac{3}{8} = \frac{1}{16} = 1 - P(closed \mid u)$$
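The same computation as a short Python sketch, using the transition probabilities from slide 7 and the prior P(open) = 5/8 from slide 2 (the dictionary layout is our own choice):

```python
# Discrete action update: P(x | u) = sum_{x'} P(x | u, x') P(x')
p_trans = {                                   # P(x | u, x') for u = "close door"
    ("closed", "open"): 0.9, ("open", "open"): 0.1,
    ("closed", "closed"): 1.0, ("open", "closed"): 0.0,
}
prior = {"open": 5.0 / 8.0, "closed": 3.0 / 8.0}   # belief after z1, z2

posterior = {x: sum(p_trans[(x, xp)] * prior[xp] for xp in prior)
             for x in ("open", "closed")}
print(posterior)   # {'open': 0.0625, 'closed': 0.9375}, i.e. 1/16 and 15/16
```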

10. Bayes Filters: Framework

Given:
- Stream of observations z and action data u: {u_1, z_1, ..., u_t, z_t}
- Sensor model P(z | x).
- Action model P(x | u, x').
- Prior probability of the system state P(x).

Wanted:
- Estimate of the state X of a dynamical system.
- The posterior of the state is also called the belief:

$$Bel(x_t) = P(x_t \mid u_1, z_1, \ldots, u_t, z_t)$$

11. Bayes Filter Example

12. Dynamic Bayesian Network for Controls, States, and Sensations

13. Markov Assumption

$$p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t)$$
$$p(x_t \mid x_{1:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t)$$

Underlying assumptions:
- Static world
- Independent noise
- Perfect model, no approximation errors

14. Bayes Filters

(z = observation, u = action, x = state)

$$Bel(x_t) = P(x_t \mid u_1, z_1, \ldots, u_t, z_t)$$

Bayes:
$$= \eta\, P(z_t \mid x_t, u_1, z_1, \ldots, u_t)\, P(x_t \mid u_1, z_1, \ldots, u_t)$$

Markov:
$$= \eta\, P(z_t \mid x_t)\, P(x_t \mid u_1, z_1, \ldots, u_t)$$

Total probability:
$$= \eta\, P(z_t \mid x_t) \int P(x_t \mid u_1, z_1, \ldots, u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, u_t)\, dx_{t-1}$$

Markov:
$$= \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, u_t)\, dx_{t-1}$$

Markov:
$$= \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, z_{t-1})\, dx_{t-1}$$

$$= \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$

15. Bayes Filter Algorithm

$$Bel(x_t) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$

Algorithm Bayes_filter(Bel(x), d):
1. η = 0
2. If d is a perceptual data item z then
3.   For all x do
4.     Bel'(x) = P(z | x) Bel(x)
5.     η = η + Bel'(x)
6.   For all x do
7.     Bel'(x) = η⁻¹ Bel'(x)
8. Else if d is an action data item u then
9.   For all x do
10.    Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
11. Return Bel'(x)
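As a sketch, the algorithm for a finite state space in Python (the names and the dictionary representation are ours):

```python
def bayes_filter(bel, d, sensor_model, action_model):
    """One Bayes filter step over a finite state space.

    bel: dict mapping state -> probability.
    d: ("z", z) for a measurement, ("u", u) for an action.
    sensor_model(z, x) returns P(z | x);
    action_model(x, u, x_prev) returns P(x | u, x_prev).
    """
    kind, value = d
    if kind == "z":                                   # perceptual data item
        bel_new = {x: sensor_model(value, x) * p for x, p in bel.items()}
        eta = sum(bel_new.values())                   # normalizer
        return {x: p / eta for x, p in bel_new.items()}
    else:                                             # action data item
        return {x: sum(action_model(x, value, xp) * bel[xp] for xp in bel)
                for x in bel}
```

Alternating measurement and action calls with the door models from the earlier slides reproduces the 0.625 and 15/16 beliefs computed by hand.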

16. Bayes Filters are Familiar!

$$Bel(x_t) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$

- Kalman filters
- Particle filters
- Hidden Markov models
- Dynamic Bayesian networks
- Partially Observable Markov Decision Processes (POMDPs)

17. Bayes Filters in Localization

$$Bel(x_t) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$

18. Summary

- Bayes rule allows us to compute probabilities that are hard to assess otherwise.
- Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
- Bayes filters are a probabilistic tool for estimating the state of dynamic systems.

19. Part 5: PARTICLE FILTERING

20. Bayes Filters in Localization

$$Bel(x_t) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$

21. Histogram = Piecewise Constant

(Figure: a continuous belief approximated by a piecewise-constant histogram over discrete bins.)

22. Piecewise Constant Representation

$$Bel(x_t = \langle x, y, \theta \rangle)$$

23. Discrete Bayes Filter Algorithm

Algorithm Discrete_Bayes_filter(Bel(x), d):
1. η = 0
2. If d is a perceptual data item z then
3.   For all x do
4.     Bel'(x) = P(z | x) Bel(x)
5.     η = η + Bel'(x)
6.   For all x do
7.     Bel'(x) = η⁻¹ Bel'(x)
8. Else if d is an action data item u then
9.   For all x do
10.    Bel'(x) = Σ_{x'} P(x | u, x') Bel(x')
11. Return Bel'(x)
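For the piecewise-constant (grid) representation of slide 22, the same two updates can be written over NumPy arrays; a minimal sketch, with our own function names:

```python
import numpy as np

def grid_measurement_update(bel, likelihood):
    """Bel'(x) = eta * P(z|x) Bel(x); bel and likelihood are arrays over cells."""
    bel = bel * likelihood
    return bel / bel.sum()                 # normalize by eta

def grid_action_update(bel, transition):
    """Bel'(x) = sum_{x'} P(x|u,x') Bel(x'); transition[i, j] = P(cell i | u, cell j)."""
    return transition @ bel
```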

24. Implementation (1)

- To update the belief upon sensory input and to carry out the normalization, one has to iterate over all cells of the grid.
- Especially when the belief is peaked (which is generally the case during position tracking), one wants to avoid updating irrelevant parts of the state space.
- One approach is to not update entire sub-spaces of the state space.
- This, however, requires monitoring whether the robot is de-localized or not.
- To achieve this, one can consider the likelihood of the observations given the active components of the state space.

25. Implementation (2)

- To efficiently update the belief upon robot motions, one typically assumes a bounded Gaussian model for the motion uncertainty.
- This reduces the update cost from O(n²) to O(n), where n is the number of states.
- The update can also be realized by shifting the data in the grid according to the measured motion.
- In a second step, the grid is then convolved with a separable Gaussian kernel, which needs fewer arithmetic operations and is easier to implement. Two-dimensional example (see the sketch after this slide):

$$\begin{pmatrix} 1/16 & 1/8 & 1/16 \\ 1/8 & 1/4 & 1/8 \\ 1/16 & 1/8 & 1/16 \end{pmatrix} \cong \begin{pmatrix} 1/4 \\ 1/2 \\ 1/4 \end{pmatrix} \ast \begin{pmatrix} 1/4 & 1/2 & 1/4 \end{pmatrix}$$
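A small NumPy/SciPy check of the separable-kernel claim, using the kernel values from the slide (scipy.ndimage is one convenient way to apply the convolutions):

```python
import numpy as np
from scipy.ndimage import convolve, convolve1d

k1 = np.array([0.25, 0.5, 0.25])          # separable 1-D Gaussian kernel
k2 = np.outer(k1, k1)                     # the full 3x3 kernel from the slide

bel = np.zeros((5, 5)); bel[2, 2] = 1.0   # peaked belief after a grid shift

# Full 2-D convolution: O(k^2) multiplications per cell ...
smeared_2d = convolve(bel, k2, mode="constant")
# ... versus two 1-D passes: O(2k) multiplications per cell.
smeared_sep = convolve1d(convolve1d(bel, k1, axis=0, mode="constant"),
                         k1, axis=1, mode="constant")

assert np.allclose(smeared_2d, smeared_sep)
```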

26. Markov Localization in Grid Map

27. Grid-based Localization

28. Mathematical Description

- Set of weighted samples: S = {⟨x^(i), w^(i)⟩ | i = 1, ..., n}, where x^(i) is a state hypothesis and w^(i) its importance weight.
- The samples represent the posterior.

29. Function Approximation

- Particle sets can be used to approximate functions.
- The more particles fall into an interval, the higher the probability of that interval.
- How to draw samples from a function/distribution?

30. Rejection Sampling

- Let us assume that f(x) < 1 for all x.
- Sample x from a uniform distribution.
- Sample c from [0, 1].
- If f(x) > c, keep the sample; otherwise reject it.
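A small Python sketch of rejection sampling on [0, 1]; the target f is our own example, chosen so that f(x) ≤ 1:

```python
import random

def f(x):
    """Example target, unnormalized, with f(x) <= 1 on [0, 1]."""
    return 4.0 * x * (1.0 - x)          # peaked at x = 0.5

def rejection_sample(n):
    samples = []
    while len(samples) < n:
        x = random.random()             # sample x uniformly
        c = random.random()             # sample c from [0, 1]
        if f(x) > c:                    # keep the sample ...
            samples.append(x)           # ... otherwise reject it
    return samples

print(sum(rejection_sample(10000)) / 10000)   # ~0.5, the mean under f
```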

31. Importance Sampling Principle

- We can even use a different distribution g to generate samples from f.
- By introducing an importance weight w, we can account for the "differences between g and f": w = f / g.
- f is often called the target, g the proposal.
- Pre-condition: f(x) > 0 ⇒ g(x) > 0.
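A minimal Python sketch of the principle: estimate an expectation under the target f with samples drawn from a different proposal g (both densities are our own toy choices):

```python
import random

def f(x):  # target density on [0, 1]: f(x) = 2x
    return 2.0 * x

def g(x):  # proposal density on [0, 1]: uniform
    return 1.0

n = 100000
xs = [random.random() for _ in range(n)]       # samples drawn from g
ws = [f(x) / g(x) for x in xs]                 # importance weights w = f/g

# Self-normalized estimate of E_f[x]; the true value is 2/3.
estimate = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
print(estimate)
```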

32. Importance Sampling with Resampling: Landmark Detection Example

33. Distributions

34. Distributions

Wanted: samples distributed according to p(x | z_1, z_2, z_3).

35. This is Easy!

We can draw samples from p(x | z_l) by adding noise to the detection parameters.

36. Importance Sampling

Target distribution f:
$$p(x \mid z_1, z_2, \ldots, z_n) = \frac{\prod_k p(z_k \mid x)\, p(x)}{p(z_1, z_2, \ldots, z_n)}$$

Sampling distribution g:
$$p(x \mid z_l) = \frac{p(z_l \mid x)\, p(x)}{p(z_l)}$$

Importance weights w:
$$w = \frac{f}{g} = \frac{p(x \mid z_1, z_2, \ldots, z_n)}{p(x \mid z_l)} = \frac{p(z_l) \prod_{k \neq l} p(z_k \mid x)}{p(z_1, z_2, \ldots, z_n)}$$
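As a sketch with our own Gaussian stand-ins for the detection likelihoods: draw samples from the detection of landmark l and weight each sample by the remaining likelihoods; the constant factors p(z_l) and p(z_1, ..., z_n) cancel when the weights are normalized:

```python
import math, random

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Toy 1-D example: three landmark detections, each a Gaussian likelihood in x.
detections = [(0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]   # (mu_k, sigma_k)
l = 0                                                # sample from detection l

xs = [random.gauss(*detections[l]) for _ in range(5000)]
ws = [math.prod(gauss(x, mu, s) for k, (mu, s) in enumerate(detections) if k != l)
      for x in xs]
total = sum(ws)
ws = [w / total for w in ws]                         # normalized weights
print(sum(w * x for w, x in zip(ws, xs)))            # posterior mean, ~0.5
```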

37. Importance Sampling with Resampling

(Figure: weighted samples before resampling; unweighted sample set after resampling.)

38. Particle Filters

39. Sensor Information: Importance Sampling

$$Bel(x) \leftarrow \alpha\, p(z \mid x)\, Bel^{-}(x)$$

$$w \leftarrow \frac{\alpha\, p(z \mid x)\, Bel^{-}(x)}{Bel^{-}(x)} = \alpha\, p(z \mid x)$$

40. Robot Motion

$$Bel^{-}(x) \leftarrow \int p(x \mid u, x')\, Bel(x')\, dx'$$

41. Sensor Information: Importance Sampling

$$Bel(x) \leftarrow \alpha\, p(z \mid x)\, Bel^{-}(x)$$

$$w \leftarrow \frac{\alpha\, p(z \mid x)\, Bel^{-}(x)}{Bel^{-}(x)} = \alpha\, p(z \mid x)$$

42. Robot Motion

$$Bel^{-}(x) \leftarrow \int p(x \mid u, x')\, Bel(x')\, dx'$$

43. Particle Filter Algorithm

- Sample the next generation of particles using the proposal distribution.
- Compute the importance weights: weight = target distribution / proposal distribution.
- Resampling: "Replace unlikely samples by more likely ones."

44. Particle Filter Algorithm

1. S_t = ∅, η = 0
2. For i = 1 ... n:
3.   Sample an index j(i) from the discrete distribution given by the weights w_{t-1}
4.   Sample x_t^i from p(x_t | x_{t-1}, u_{t-1}) using x_{t-1}^{j(i)} and u_{t-1}
5.   w_t^i = p(z_t | x_t^i)          (compute importance weight)
6.   η = η + w_t^i                   (update normalization factor)
7.   S_t = S_t ∪ {⟨x_t^i, w_t^i⟩}    (insert)
8. For i = 1 ... n:
9.   w_t^i = w_t^i / η               (normalize weights)

45. Particle Filter Algorithm

$$Bel(x_t) = \eta\, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$

- Draw x_{t-1}^i from Bel(x_{t-1}).
- Draw x_t^i from p(x_t | x_{t-1}^i, u_{t-1}).
- Importance factor for x_t^i:

$$w_t^i = \frac{\text{target distribution}}{\text{proposal distribution}} = \frac{\eta\, p(z_t \mid x_t)\, p(x_t \mid x_{t-1}, u_{t-1})\, Bel(x_{t-1})}{p(x_t \mid x_{t-1}, u_{t-1})\, Bel(x_{t-1})} \propto p(z_t \mid x_t)$$
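One filter step (resample, predict, weight) as a compact runnable Python sketch; the names are ours, and motion_sample and sensor_prob stand in for the robot's actual motion and sensor models:

```python
import random

def particle_filter_step(particles, weights, u, z, motion_sample, sensor_prob):
    """One step of the particle filter from slide 44.

    particles, weights: previous sample set S_{t-1}.
    motion_sample(x_prev, u): draws x_t ~ p(x_t | x_{t-1}, u_{t-1}).
    sensor_prob(z, x): returns p(z_t | x_t).
    """
    n = len(particles)
    new_particles, new_weights = [], []
    for _ in range(n):
        j = random.choices(range(n), weights=weights)[0]  # resample index j(i)
        x = motion_sample(particles[j], u)                # predict with motion model
        w = sensor_prob(z, x)                             # importance weight
        new_particles.append(x)
        new_weights.append(w)
    eta = sum(new_weights)
    return new_particles, [w / eta for w in new_weights]  # normalize
```

Drawing the indices one at a time with random.choices costs O(n) per draw; the systematic resampling algorithm on slide 48 performs all n draws in linear time overall.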

46. Resampling

- Given: set S of weighted samples.
- Wanted: random sample, where the probability of drawing x_i is given by w_i.
- Typically done n times with replacement to generate the new sample set S'.

47. Resampling

(Figure: two sampling wheels over the weights w_1, ..., w_n.)

- Roulette wheel: binary search; O(n log n).
- Stochastic universal sampling (systematic resampling): linear time complexity; easy to implement; low variance.

48. Resampling Algorithm

Algorithm systematic_resampling(S, n):
1. S' = ∅, c_1 = w^1
2. For i = 2 ... n                    (generate CDF)
3.   c_i = c_{i-1} + w^i
4. u_1 ~ U]0, n⁻¹], i = 1             (initialize threshold)
5. For j = 1 ... n                    (draw samples)
6.   While (u_j > c_i)                (skip until next threshold reached)
7.     i = i + 1
8.   S' = S' ∪ {⟨x^i, n⁻¹⟩}           (insert)
9.   u_{j+1} = u_j + n⁻¹              (increment threshold)
10. Return S'

Also called stochastic universal sampling.
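The same algorithm as a runnable Python sketch (assumes the weights are normalized to sum to 1; the bounds guard protects against floating-point round-off in the CDF):

```python
import random
from itertools import accumulate

def systematic_resampling(particles, weights):
    """Stochastic universal sampling: one random draw, O(n), low variance."""
    n = len(particles)
    cdf = list(accumulate(weights))              # generate CDF
    u = random.uniform(0.0, 1.0 / n)             # initialize threshold u_1
    i, resampled = 0, []
    for _ in range(n):                           # draw n samples
        while i < n - 1 and u > cdf[i]:          # skip until threshold reached
            i += 1
        resampled.append(particles[i])           # insert with weight 1/n
        u += 1.0 / n                             # increment threshold
    return resampled
```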

49. Mobile Robot Localization

- Each particle is a potential pose of the robot.
- The proposal distribution is the motion model of the robot (prediction step).
- The observation model is used to compute the importance weight (correction step).

50. Motion Model

(Figure: particles propagated through the motion model from the start pose.)

51. Proximity Sensor Model

(Figure: measurement likelihoods for a sonar sensor and a laser sensor.)

