The Kalman Filter (part 2)

Reading Assignment
• Chapter 4 of PR – Focus on histogram and particle filters

Homework 1
• Something fun
• See Canvas – will preview at end of class
Administrative Stuff

Rudolf Emil Kalman
[http://www.cs.unc.edu/~welch/kalman/kalmanBiblio.html]

Definition
• A Kalman filter is simply an optimal recursive data processing algorithm.

Definition
• "The Kalman filter incorporates all information that can be provided to it. It processes all available measurements, regardless of their precision, to estimate the current value of the variables of interest." [Maybeck (1979)]
• Under some assumptions, the Kalman filter is optimal with respect to virtually any criterion that makes sense.
Why do we need a filter?
• No mathematical model of a real system is perfect
• Real-world disturbances
• Imperfect sensors

Application: Radar Tracking

Application: Lunar Landing
https://github.com/chrislgarry/Apollo-11
Application: Missile Tracking

Application: Sailing

Application: Robot Navigation

Application: Other Tracking
Application: Head Tracking

Application: Face & Hand Tracking

A Simple Recursive Example
• Problem statement: given the measurement sequence z_1, z_2, …, z_n, find the mean.

First Approach
1. Make the first measurement z_1. Store z_1 and estimate the mean as µ_1 = z_1.
2. Make the second measurement z_2. Store z_2 along with z_1 and estimate the mean as µ_2 = (z_1 + z_2)/2.
[Brown and Hwang (1992)]
First Approach (cont'd)
3. Make the third measurement z_3. Store z_3 along with z_1 and z_2 and estimate the mean as µ_3 = (z_1 + z_2 + z_3)/3.
n. Make the n-th measurement z_n. Store z_n along with z_1, z_2, …, z_(n-1) and estimate the mean as µ_n = (z_1 + z_2 + … + z_n)/n.
[Brown and Hwang (1992)]

Second Approach
1. Make the first measurement z_1. Compute the mean estimate as µ_1 = z_1. Store µ_1 and discard z_1.
2. Make the second measurement z_2. Compute the estimate of the mean as a weighted sum of the previous estimate µ_1 and the current measurement z_2: µ_2 = (1/2)µ_1 + (1/2)z_2. Store µ_2 and discard z_2 and µ_1.
[Brown and Hwang (1992)]
Second Approach (cont'd)
3. Make the third measurement z_3. Compute the estimate of the mean as a weighted sum of the previous estimate µ_2 and the current measurement z_3: µ_3 = (2/3)µ_2 + (1/3)z_3. Store µ_3 and discard z_3 and µ_2.
n. Make the n-th measurement z_n. Compute the estimate of the mean as a weighted sum of the previous estimate µ_(n-1) and the current measurement z_n: µ_n = ((n-1)/n)µ_(n-1) + (1/n)z_n. Store µ_n and discard z_n and µ_(n-1).
[Brown and Hwang (1992)]

Comparison: Batch Method vs. Recursive Method
• The second (recursive) procedure gives the same result as the first (batch) procedure.
• It uses the result from the previous step to help obtain an estimate at the current step.
• The difference is that it does not need to keep the whole measurement sequence in memory.
[Brown and Hwang (1992)]
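The batch/recursive equivalence can be checked with a short sketch; the measurement values below are made up for illustration.

```python
# Hypothetical readings z_1..z_n
measurements = [2.0, 4.0, 3.0, 5.0, 4.5]

# Batch method: keep every measurement and average them all at each step.
batch = [sum(measurements[:n]) / n for n in range(1, len(measurements) + 1)]

# Recursive method: keep only the previous estimate.
recursive = []
mu = 0.0
for n, z in enumerate(measurements, start=1):
    # mu_n = ((n-1)/n) mu_{n-1} + (1/n) z_n
    mu = ((n - 1) / n) * mu + (1.0 / n) * z
    recursive.append(mu)

print(batch)
print(recursive)  # identical to batch, but only mu was stored
```

Both lists agree at every step, while the recursive version never holds more than one number of state.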
Second Approach Second Approach (rewrite the general formula) (rewrite the general formula) µ n = (n-1)/n µ n-1 +1/n z n µ n = (n-1)/n µ n-1 +1/n z n µ n = (n/n) µ n-1 - (1/n) µ n-1 +1/n z n µ n = (n/n) µ n-1 - (1/n) µ n-1 +1/n z n µ n = µ n-1 + 1/n (z n - µ n-1 ) Difference Gain Old Between Factor Estimate New Reading and Old Estimate Second Approach (rewrite the general formula) Gaussian Properties Difference Gain Old Between Factor Estimate New Reading and Old Estimate
The Gaussian Function
• Gaussian pdf:
  p(x) = (1 / (σ√(2π))) exp(-(x - µ)² / (2σ²))

Properties
• If x_1 ~ N(µ_1, σ_1²) and x_2 ~ N(µ_2, σ_2²) are independent,
• Then a_1·x_1 + a_2·x_2 ~ N(a_1µ_1 + a_2µ_2, a_1²σ_1² + a_2²σ_2²).
Properties: Summation and Subtraction
• For independent Gaussians, x_1 ± x_2 ~ N(µ_1 ± µ_2, σ_1² + σ_2²); the variances add in both cases.

A simple example using diagrams

Conditional density of position based on measured value of z_1
[Maybeck (1979)]
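The summation property can be checked empirically; the means and standard deviations below are arbitrary choices for the demonstration.

```python
import random

# If x1 ~ N(mu1, s1^2) and x2 ~ N(mu2, s2^2) are independent, then
# x1 + x2 ~ N(mu1 + mu2, s1^2 + s2^2). Sample the sum and compare.
random.seed(0)
mu1, s1 = 1.0, 2.0
mu2, s2 = 3.0, 1.5

samples = [random.gauss(mu1, s1) + random.gauss(mu2, s2) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

print(mean, var)  # should be close to 4.0 and 6.25 (= 2^2 + 1.5^2)
```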
Conditional density of position based on measured value of z_1
• measured position z_1, with uncertainty σ_z1
[Maybeck (1979)]

Conditional density of position based on measurement of z_2 alone
• measured position z_2, with uncertainty σ_z2
[Maybeck (1979)]

Conditional density of position based on data z_1 and z_2
• position estimate, with reduced uncertainty
[Maybeck (1979)]
Propagation of the conditional density
• movement vector
• expected position just prior to taking measurement 3
[Maybeck (1979)]

Propagation of the conditional density
• uncertainty σ_x(t_3) grows during the movement
• measured position z_3
[Maybeck (1979)]
Updating the conditional density after the third measurement
• position estimate x̂(t_3), with uncertainty σ_x(t_3), combines the predicted position with measurement z_3
[Maybeck (1979)]

Questions?

Now let's do the same thing… but this time we'll use math.
How should we combine the two measurements?
[Maybeck (1979)]

Calculating the new mean
µ = [σ_z2² / (σ_z1² + σ_z2²)] z_1 + [σ_z1² / (σ_z1² + σ_z2²)] z_2
• Scaling Factor 1: σ_z2² / (σ_z1² + σ_z2²)
• Scaling Factor 2: σ_z1² / (σ_z1² + σ_z2²)
• Why is this not z_1? Because even a noisier second measurement carries information, so the estimate shifts away from z_1 toward z_2.
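The variance-weighted combination of two measurements can be sketched as a small helper; the example values at the bottom are made up.

```python
def fuse_means(z1, var1, z2, var2):
    """Combine two noisy measurements of the same quantity.

    Each measurement is weighted by the *other* measurement's variance,
    so the more precise (lower-variance) reading gets the larger weight.
    """
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    return w1 * z1 + w2 * z2

# The lower-variance measurement (z2 here) pulls the estimate toward itself:
print(fuse_means(0.0, 4.0, 10.0, 1.0))  # weight on z2 is 4/5, so -> 8.0
```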
Calculating the new variance
1/σ² = 1/σ_z1² + 1/σ_z2²
• Scaling Factor 1: σ_z2² / (σ_z1² + σ_z2²)
• Scaling Factor 2: σ_z1² / (σ_z1² + σ_z2²)
[Maybeck (1979)]

Remember the Gaussian Properties?
• If x_1 ~ N(µ_1, σ_1²) and x_2 ~ N(µ_2, σ_2²) are independent,
• Then a_1·x_1 + a_2·x_2 ~ N(a_1µ_1 + a_2µ_2, a_1²σ_1² + a_2²σ_2²).
• Note: this is a σ², not a σ.
The scaling factors must be squared!
σ² = [σ_z2² / (σ_z1² + σ_z2²)]² σ_z1² + [σ_z1² / (σ_z1² + σ_z2²)]² σ_z2²

Therefore the new variance is
σ² = σ_z1² σ_z2² / (σ_z1² + σ_z2²)

Another Way to Express The New Position
µ = z_1 + [σ_z1² / (σ_z1² + σ_z2²)] (z_2 - z_1)
• Try to derive this on your own.
[Maybeck (1979)]
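One way to carry the squared scaling factors through to the compact variance expression (a worked version of the algebra the slide asks for):

```latex
\sigma^2
= \left(\frac{\sigma_{z_2}^2}{\sigma_{z_1}^2+\sigma_{z_2}^2}\right)^2 \sigma_{z_1}^2
+ \left(\frac{\sigma_{z_1}^2}{\sigma_{z_1}^2+\sigma_{z_2}^2}\right)^2 \sigma_{z_2}^2
= \frac{\sigma_{z_1}^2\,\sigma_{z_2}^2\left(\sigma_{z_2}^2+\sigma_{z_1}^2\right)}
       {\left(\sigma_{z_1}^2+\sigma_{z_2}^2\right)^2}
= \frac{\sigma_{z_1}^2\,\sigma_{z_2}^2}{\sigma_{z_1}^2+\sigma_{z_2}^2}
```

Taking reciprocals of the last expression recovers the form 1/σ² = 1/σ_z1² + 1/σ_z2².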
Another Way to Express The New Position
µ = z_1 + K(z_2 - z_1), where K = σ_z1² / (σ_z1² + σ_z2²)
[Maybeck (1979)]

The equation for the variance can also be rewritten as
σ² = (1 - K) σ_z1²
[Maybeck (1979)]

Adding Movement
• Between measurements the vehicle moves, so the predicted mean shifts by the movement and the uncertainty grows.
[Maybeck (1979)]
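The predict/update cycle described in these slides can be sketched as two small functions for the one-dimensional case; the function names, noise parameters, and numbers below are illustrative assumptions, not the slides' notation.

```python
def predict(mu, var, velocity, dt, motion_var):
    """Movement (prediction) step for a 1-D position estimate.

    The mean shifts by the commanded movement; the variance grows by the
    assumed process-noise variance accumulated over the interval dt.
    """
    return mu + velocity * dt, var + motion_var * dt

def update(mu, var, z, meas_var):
    """Measurement update using the Kalman gain K = var / (var + meas_var)."""
    K = var / (var + meas_var)
    return mu + K * (z - mu), (1.0 - K) * var

# Hypothetical numbers: start at position 116 (variance 5.76),
# move 10 units, then take a noisy fix at 128.
mu, var = predict(116.0, 5.76, velocity=10.0, dt=1.0, motion_var=2.0)
mu, var = update(mu, var, z=128.0, meas_var=4.0)
print(mu, var)
```

Note that prediction always increases the variance, while the update always decreases it, matching the density plots in the diagram slides.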
Adding Movement
• Predicted position: x̂⁻(t_3) = x̂(t_2) + u·Δt
• Predicted uncertainty: σ²(t_3⁻) = σ²(t_2) + σ_w²·Δt
[Maybeck (1979)]

Properties of K
• If the measurement noise is large, K is small: as σ_z2² → ∞, K = σ_z1² / (σ_z1² + σ_z2²) → 0, and the new measurement is largely ignored.
[Maybeck (1979)]

The Kalman Filter (part 2)
Example Applications
https://www.youtube.com/watch?v=MxwVwCuBEDA
https://github.com/pabsaura/Prediction-of-Trajectory-with-kalman-filter-and-open-cv

Another Example
https://www.youtube.com/watch?v=sG-h5ONsj9s
https://www.myzhar.com/blog/tutorials/tutorial-opencv-ball-tracker-using-kalman-filter/
A Simple Example
• Consider a ship sailing east with a perfect compass trying to estimate its position.
• You estimate the position x from the stars as z_1 = 100 with a precision of σ_z1 = 4 miles.
[www.cse.lehigh.edu/~spletzer/cse398_Spring05/lec011_Localization2.ppt]

A Simple Example (cont'd)
• Along comes a more experienced navigator, and she takes her own sighting z_2.
• She estimates the position x = z_2 = 125 with a precision of σ_z2 = 3 miles.
• How do you merge her estimate with your own?

A Simple Example (cont'd)
• With the distributions being Gaussian, the best estimate for the state is the mean of the combined distribution, so:
µ = [σ_z2² / (σ_z1² + σ_z2²)] z_1 + [σ_z1² / (σ_z1² + σ_z2²)] z_2
  = [9/25]·100 + [16/25]·125 = 116
1/σ² = 1/σ_z1² + 1/σ_z2² = 1/16 + 1/9 = 25/144 ⇒ σ = 2.4

A Simple Example (cont'd)
• Or alternately:
x̂_2 = z_1 + [σ_z1² / (σ_z1² + σ_z2²)](z_2 - z_1) = z_1 + K_2(z_2 - z_1) = 116
where K_t is referred to as the Kalman gain, and must be computed at each time step.
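The arithmetic in the sailing example can be verified directly:

```python
# Slide values: z1 = 100 with sigma 4 miles, z2 = 125 with sigma 3 miles.
z1, var1 = 100.0, 4.0 ** 2
z2, var2 = 125.0, 3.0 ** 2

# Weighted-mean form.
mu = (var2 / (var1 + var2)) * z1 + (var1 / (var1 + var2)) * z2

# Combined variance: 1/var = 1/var1 + 1/var2.
var = 1.0 / (1.0 / var1 + 1.0 / var2)

# Equivalent gain form: x2 = z1 + K*(z2 - z1) with K = var1/(var1 + var2).
K = var1 / (var1 + var2)
x2 = z1 + K * (z2 - z1)

print(mu, var ** 0.5, x2)  # 116.0 2.4 116.0
```

Both forms give the same estimate, 116 miles with a standard deviation of 2.4 miles, matching the slide.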