
Multiple Sensor Target Tracking: Basic Idea
Sensor Data Fusion - Methods and Applications, 3rd Lecture, October 30, 2019



  1. Multiple Sensor Target Tracking: Basic Idea

     Iterative updating of conditional probability densities!
     kinematic target state x_k at time t_k, accumulated sensor data Z^k,
     a priori knowledge: target dynamics models, sensor model

     • prediction:   p(x_{k−1} | Z^{k−1}) → p(x_k | Z^{k−1})   (dynamics model)
     • filtering:    p(x_k | Z^{k−1}) → p(x_k | Z^k)   (sensor data Z_k, sensor model)
     • retrodiction: p(x_{l−1} | Z^k) ← p(x_l | Z^k)   (dynamics model, filtering output)
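     In equation form, the two forward steps are the usual Bayesian recursion (notation as on the slide); retrodiction runs the same logic backwards in time:

     ```latex
     % Prediction (Chapman-Kolmogorov): propagate the last estimate with the dynamics model
     p(x_k \mid Z^{k-1}) = \int \mathrm{d}x_{k-1}\; p(x_k \mid x_{k-1})\, p(x_{k-1} \mid Z^{k-1})

     % Filtering (Bayes): fuse the new sensor data z_k via the sensor model (likelihood)
     p(x_k \mid Z^k) = \frac{p(z_k \mid x_k)\, p(x_k \mid Z^{k-1})}
                            {\int \mathrm{d}x_k\; p(z_k \mid x_k)\, p(x_k \mid Z^{k-1})}
     ```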

  2. The Multivariate Gaussian Pdf

     – wanted: probabilities 'concentrated' around a center x̄
     – quadratic distance: q(x) = ½ (x − x̄)^⊤ P^{−1} (x − x̄); q(x) defines an ellipsoid around x̄, its volume and orientation being determined by a matrix P (symmetric: P^⊤ = P, positive definite: all eigenvalues > 0)
     – first attempt: p(x) = e^{−q(x)} / ∫ dx e^{−q(x)}   (normalized!), i.e.

          p(x) = N(x; x̄, P) = e^{−½ (x − x̄)^⊤ P^{−1} (x − x̄)} / √|2πP|

     – Gaussian mixtures: p(x) = Σ_i p_i N(x; x̄_i, P_i)   (weighted sums)
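     As a numerical companion, a minimal sketch that evaluates N(x; x̄, P) and a two-component Gaussian mixture with NumPy/SciPy; all concrete numbers are illustrative assumptions, not values from the lecture:

     ```python
     # Minimal sketch: evaluate N(x; x_bar, P) and a Gaussian mixture.
     import numpy as np
     from scipy.stats import multivariate_normal

     x_bar = np.array([0.0, 0.0])
     P = np.array([[2.0, 0.5],
                   [0.5, 1.0]])      # symmetric, positive definite

     x = np.array([1.0, -0.5])
     print(multivariate_normal.pdf(x, mean=x_bar, cov=P))

     # Gaussian mixture: weighted sum of component densities (weights sum to 1)
     weights = [0.7, 0.3]
     means   = [np.array([0.0, 0.0]), np.array([3.0, 1.0])]
     covs    = [P, np.eye(2)]
     p_mix = sum(w * multivariate_normal.pdf(x, mean=m, cov=C)
                 for w, m, C in zip(weights, means, covs))
     print(p_mix)
     ```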

  3. Very First Look at an Important Data Fusion Algorithm

     Kalman filter: x_k = (r_k^⊤, ṙ_k^⊤)^⊤,  Z^k = {z_k, Z^{k−1}}

     • initiation: p(x_0) = N(x_0; x_{0|0}, P_{0|0}), initial ignorance: P_{0|0} 'large'

     • prediction (dynamics model: F_{k|k−1}, D_{k|k−1}):
       N(x_{k−1}; x_{k−1|k−1}, P_{k−1|k−1}) → N(x_k; x_{k|k−1}, P_{k|k−1})
       x_{k|k−1} = F_{k|k−1} x_{k−1|k−1}
       P_{k|k−1} = F_{k|k−1} P_{k−1|k−1} F_{k|k−1}^⊤ + D_{k|k−1}

     • filtering (current measurement z_k, sensor model: H_k, R_k):
       N(x_k; x_{k|k−1}, P_{k|k−1}) → N(x_k; x_{k|k}, P_{k|k})
       x_{k|k} = x_{k|k−1} + W_{k|k−1} ν_{k|k−1},  ν_{k|k−1} = z_k − H_k x_{k|k−1}
       S_{k|k−1} = H_k P_{k|k−1} H_k^⊤ + R_k
       P_{k|k} = P_{k|k−1} − W_{k|k−1} S_{k|k−1} W_{k|k−1}^⊤
       W_{k|k−1} = P_{k|k−1} H_k^⊤ S_{k|k−1}^{−1}   ('Kalman gain matrix')

     A deeper look into the dynamics and sensor models is necessary!
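     The prediction and filtering equations translate almost line by line into code. A minimal sketch; the concrete model matrices (a 1D nearly-constant-velocity model with position-only measurements) are my assumptions for illustration, not from the lecture:

     ```python
     # One Kalman prediction/filtering cycle, following the slide's equations.
     import numpy as np

     def kalman_step(x_est, P_est, z, F, D, H, R):
         # prediction: x_{k|k-1} = F x_{k-1|k-1},  P_{k|k-1} = F P F^T + D
         x_pred = F @ x_est
         P_pred = F @ P_est @ F.T + D
         # filtering: innovation nu, innovation covariance S, Kalman gain W
         nu = z - H @ x_pred
         S = H @ P_pred @ H.T + R
         W = P_pred @ H.T @ np.linalg.inv(S)
         x_filt = x_pred + W @ nu
         P_filt = P_pred - W @ S @ W.T
         return x_filt, P_filt

     dt = 1.0
     F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition (assumed)
     D = 0.01 * np.eye(2)                    # process noise covariance (assumed)
     H = np.array([[1.0, 0.0]])              # position-only measurement (assumed)
     R = np.array([[0.25]])                  # measurement noise covariance (assumed)

     x0 = np.zeros(2)
     P0 = 1e3 * np.eye(2)                    # 'large': initial ignorance
     x1, P1 = kalman_step(x0, P0, z=np.array([1.1]), F=F, D=D, H=H, R=R)
     print(x1, P1)
     ```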

  4. Create your own ground truth generator! (Exercise 3.1)

     Consider a car moving on a mountain pass road modeled by

          r(t) = (x(t), y(t), z(t))^⊤ = (v t,  a_y sin(4π v t / a_x),  a_z sin(π v t / a_x))^⊤

     with v = 20 km/h, a_y = a_z = 1 km, t ∈ [0, a_x/v].

     1. Plot the trajectory. Are the parameters reasonable? Try alternatives.
     2. Calculate and plot the velocity and acceleration vectors ṙ(t) = (ẋ(t), ẏ(t), ż(t))^⊤ and r̈(t) = (ẍ(t), ÿ(t), z̈(t))^⊤.
     3. Calculate for each instant of time t the tangential vector at r(t): t(t) = ṙ(t) / |ṙ(t)|.
     4. Plot |ṙ(t)|, |r̈(t)|, and r̈(t)^⊤ t(t) over the time interval.
     5. Discuss the temporal behaviour based on the trajectory r(t)! (A code sketch for such a generator follows below.)
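     A possible starting point for the exercise, as a sketch; the road length a_x is not fixed on the slide, so a_x = 10 km is an assumption, and the derivatives are taken numerically rather than analytically:

     ```python
     # Ground-truth generator sketch for Exercise 3.1 (a_x = 10 km is assumed).
     import numpy as np
     import matplotlib.pyplot as plt

     v, a_x, a_y, a_z = 20.0, 10.0, 1.0, 1.0      # [km/h], [km], [km], [km]
     t = np.linspace(0.0, a_x / v, 1000)          # time [h], t in [0, a_x/v]

     # trajectory r(t) = (v t, a_y sin(4 pi v t / a_x), a_z sin(pi v t / a_x))
     x = v * t
     y = a_y * np.sin(4.0 * np.pi * v * t / a_x)
     z = a_z * np.sin(np.pi * v * t / a_x)

     # velocity and acceleration by finite differences
     r = np.stack([x, y, z])
     r_dot  = np.gradient(r, t, axis=1)
     r_ddot = np.gradient(r_dot, t, axis=1)

     speed = np.linalg.norm(r_dot, axis=0)        # |r_dot(t)|
     tang  = r_dot / speed                        # tangential unit vectors t(t)
     a_tan = np.sum(r_ddot * tang, axis=0)        # tangential acceleration r_ddot(t)^T t(t)

     fig = plt.figure()
     ax = fig.add_subplot(projection="3d")
     ax.plot(x, y, z)
     ax.set_xlabel("x [km]"); ax.set_ylabel("y [km]"); ax.set_zlabel("z [km]")
     plt.show()
     ```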

  5.–8. How to deal with probability density functions? (one slide, built up bullet by bullet across slides 5–8)

     • pdf p(x): extract probability statements about the RV x by integration!
     • naïvely: positive and normalized functions (p(x) ≥ 0, ∫ dx p(x) = 1)
     • conditional pdf p(x|y) = p(x, y) / p(y): impact of information on y on the RV x?
     • marginal density p(x) = ∫ dy p(x, y) = ∫ dy p(x|y) p(y): enter y!
     • Bayes: p(x|y) = p(y|x) p(x) / p(y) = p(y|x) p(x) / ∫ dx p(y|x) p(x);  p(x|y) ← p(y|x), p(x)!
     • certain knowledge on x: p(x) = δ(x − y) '=' lim_{σ→0} (2πσ²)^{−1/2} e^{−(x−y)²/(2σ²)}
     • transformed RV y = t[x]: p(y) = ∫ dx p(y, x) = ∫ dx p(y|x) p_x(x) = ∫ dx δ(y − t[x]) p_x(x) =: [T p_x](y)   (T: p_x ↦ p, "transfer operator")
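     The transfer operator has a simple Monte Carlo reading: sampling x from p_x and pushing the samples through t[·] produces samples from [T p_x]. A sketch, where the choice t[x] = x² with x ~ N(0, 1) is an illustrative assumption (the transformed density is then chi-squared with one degree of freedom):

     ```python
     # Monte Carlo view of p(y) = \int dx delta(y - t[x]) p_x(x).
     import numpy as np
     from scipy import stats
     import matplotlib.pyplot as plt

     rng = np.random.default_rng(0)
     x = rng.standard_normal(100_000)   # samples from p_x = N(0, 1)
     y = x**2                           # transformed RV y = t[x]

     grid = np.linspace(0.01, 6.0, 200)
     plt.hist(y, bins=200, range=(0, 6), density=True,
              alpha=0.5, label="histogram of t[x] samples")
     plt.plot(grid, stats.chi2.pdf(grid, df=1),
              label="analytic p(y): chi-squared, 1 dof")
     plt.legend(); plt.xlabel("y"); plt.show()
     ```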

  9.–11. The Multivariate Gaussian Pdf (slides 9–11 repeat slide 2 and build it up further)

     – quadratic distance: q(x) = ½ (x − x̄)^⊤ P^{−1} (x − x̄) defines an ellipsoid around x̄, its volume and orientation being determined by a matrix P (symmetric: P^⊤ = P, positive definite: all eigenvalues > 0)
     – normalized density: p(x) = N(x; x̄, P) = e^{−½ (x − x̄)^⊤ P^{−1} (x − x̄)} / √|2πP|

     Exercise 3.2  Show: ∫ dx e^{−q(x)} = √|2πP|,  E[x] = x̄,  E[(x − x̄)(x − x̄)^⊤] = P.
     Trick: symmetric, positive definite matrices can be diagonalized by an orthogonal coordinate transform.

     – moments: E[x] = x̄,  E[(x − x̄)(x − x̄)^⊤] = P   (covariance)
     – covariance matrix: the expected error of the expectation
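     A sketch of how the diagonalization trick settles Exercise 3.2, in the slide's notation:

     ```latex
     % Write P = O D O^\top with O orthogonal (O^\top O = 1), D = diag(d_1,...,d_n), d_i > 0.
     % Substituting u = O^\top (x - \bar{x}) (Jacobian |det O| = 1) factorizes the integral:
     \int \mathrm{d}x\, e^{-q(x)}
       = \int \mathrm{d}u\, e^{-\frac{1}{2} u^\top D^{-1} u}
       = \prod_{i=1}^{n} \int \mathrm{d}u_i\, e^{-u_i^2/(2 d_i)}
       = \prod_{i=1}^{n} \sqrt{2\pi d_i}
       = \sqrt{|2\pi P|}.
     % E[x] = \bar{x}: each one-dimensional integrand u_i e^{-u_i^2/(2 d_i)} is odd.
     % Covariance: E[u u^\top] = D, hence E[(x-\bar{x})(x-\bar{x})^\top] = O D O^\top = P.
     ```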
