Likelihood Functions


1. Likelihood Functions

The likelihood function answers the question: What does the sensor tell us about the state $x$ of the object? (input: sensor data, sensor model)

• ideal conditions, one object ($P_D = 1$, $\rho_F = 0$), one measurement at each time:

$$p(z_k \mid x_k) = \mathcal{N}(z_k;\, Hx_k,\, R)$$

• real conditions, one object ($P_D < 1$, $\rho_F > 0$), $n_k$ measurements $Z_k = \{z_k^1, \ldots, z_k^{n_k}\}$ at each time:

$$p(Z_k, n_k \mid x_k) \;\propto\; (1 - P_D)\,\rho_F \;+\; P_D \sum_{j=1}^{n_k} \mathcal{N}\!\left(z_k^j;\, Hx_k,\, R\right)$$
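As a concrete illustration, here is a minimal Python sketch (my own, not from the lecture; `p_d` and `rho_f` stand for $P_D$ and $\rho_F$) that evaluates this unnormalized likelihood for a linear position sensor:

```python
import numpy as np
from scipy.stats import multivariate_normal

def measurement_likelihood(measurements, x, H, R, p_d, rho_f):
    """Unnormalized likelihood p(Z, n | x) for one object under
    missed detections (P_D < 1) and uniform clutter (rho_F > 0)."""
    z_pred = H @ x  # predicted measurement Hx
    # one Gaussian term per measurement interpretation
    detection_sum = sum(
        multivariate_normal.pdf(z, mean=z_pred, cov=R) for z in measurements
    )
    return (1.0 - p_d) * rho_f + p_d * detection_sum

# toy example: 2D position measured from a 4D (position, velocity) state
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
R = 0.5 * np.eye(2)
x = np.array([1.0, 2.0, 0.1, -0.2])
Z = [np.array([1.1, 2.1]), np.array([5.0, -3.0])]  # one plausible hit, one clutter-like
print(measurement_likelihood(Z, x, H, R, p_d=0.9, rho_f=1e-3))
```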

2. Bayes Filtering

for: $P_D < 1$, $\rho_F > 0$, well-separated objects

accumulated data $\mathcal{Z}^k = \{Z_k, \mathcal{Z}^{k-1}\}$; current data $Z_k = \{z_k^j\}_{j=1}^{m_k}$; state $x_k$

interpretation hypotheses $E_k$ for $Z_k$, i.e. $m_k + 1$ interpretations:
• object not detected: probability $1 - P_D$
• $z_k \in Z_k$ stems from the object: probability $P_D$

interpretation histories $H_k$ for $\mathcal{Z}^k$, tree structure: $H_k = (E_{H_k}, H_{k-1}) \in \mathcal{H}_k$; current interpretation: $E_{H_k}$, prehistory: $H_{k-1}$

$$p(x_k \mid \mathcal{Z}^k) \;=\; \sum_{H_k} p(x_k, H_k \mid \mathcal{Z}^k) \;=\; \sum_{H_k} \underbrace{p(H_k \mid \mathcal{Z}^k)}_{\text{weight!}}\; \underbrace{p(x_k \mid H_k, \mathcal{Z}^k)}_{\text{given } H_k:\ \text{unique}}$$

a 'mixture' density
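A minimal sketch (my own naming, not from the lecture) of how such an interpretation-history tree and the resulting mixture density could be represented:

```python
import numpy as np
from dataclasses import dataclass
from scipy.stats import multivariate_normal

@dataclass
class Hypothesis:
    """One interpretation history H_k in the tree H_k = (E_Hk, H_{k-1}):
    a weight p(H_k | Z^k), the Gaussian component it induces, the current
    interpretation E_Hk, and a link to its prehistory."""
    weight: float                  # p(H_k | Z^k)
    mean: np.ndarray               # x_{H_k}
    cov: np.ndarray                # P_{H_k}
    interpretation: int            # E_{H_k}: 0 = not detected, j >= 1 = z_k^j
    parent: "Hypothesis | None" = None

def mixture_pdf(hypotheses, x):
    """p(x_k | Z^k) = sum over H_k of p_Hk * N(x; x_Hk, P_Hk)."""
    return sum(h.weight * multivariate_normal.pdf(x, mean=h.mean, cov=h.cov)
               for h in hypotheses)
```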

3. Closer look: $P_D < 1$, $\rho_F > 0$, well-separated targets

filtering (at time $t_{k-1}$):

$$p(x_{k-1} \mid \mathcal{Z}^{k-1}) \;=\; \sum_{H_{k-1}} p_{H_{k-1}}\, \mathcal{N}\!\left(x_{k-1};\, x_{H_{k-1}},\, P_{H_{k-1}}\right)$$

prediction (for time $t_k$; Markov model):

$$p(x_k \mid \mathcal{Z}^{k-1}) \;=\; \int \mathrm{d}x_{k-1}\; p(x_k \mid x_{k-1})\, p(x_{k-1} \mid \mathcal{Z}^{k-1}) \;=\; \sum_{H_{k-1}} p_{H_{k-1}}\, \mathcal{N}\!\left(x_k;\, F x_{H_{k-1}},\, F P_{H_{k-1}} F^\top + D\right)$$

measurement likelihood ($E_k^j$: interpretations; sensor model: $H$, $R$, $P_D$, $\rho_F$):

$$p(Z_k, m_k \mid x_k) \;=\; \sum_{j=0}^{m_k} p(Z_k \mid E_k^j, x_k, m_k)\, P(E_k^j \mid x_k, m_k) \;\propto\; (1 - P_D)\,\rho_F \;+\; P_D \sum_{j=1}^{m_k} \mathcal{N}\!\left(z_k^j;\, H x_k,\, R\right)$$

filtering (at time $t_k$; Bayes' rule, exploit the product formula):

$$p(x_k \mid \mathcal{Z}^k) \;\propto\; p(Z_k, m_k \mid x_k)\; p(x_k \mid \mathcal{Z}^{k-1}) \;=\; \sum_{H_k} p_{H_k}\, \mathcal{N}\!\left(x_k;\, x_{H_k},\, P_{H_k}\right)$$
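The prediction acts on each mixture component separately, leaving the weights untouched. A minimal sketch (the constant-velocity model is my own example, not from the slides):

```python
import numpy as np

def predict_component(mean, cov, F, D):
    """Kalman prediction of one mixture component:
    N(x; x_H, P_H)  ->  N(x; F x_H, F P_H F^T + D).
    The weight p_H is unchanged by the prediction."""
    return F @ mean, F @ cov @ F.T + D

# toy constant-velocity model in 1D (assumed for illustration)
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])
D = 0.1 * np.array([[dt**3 / 3, dt**2 / 2],
                    [dt**2 / 2, dt]])
mean_pred, cov_pred = predict_component(np.array([0.0, 1.0]), np.eye(2), F, D)
```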

4. Exercise: Show that $p(x_k \mid \mathcal{Z}^k)$ is given by Kalman updates and weights $p^j_{H_{k-1}}$:

$$p(x_k \mid \mathcal{Z}^k) \;=\; \sum_{H_{k-1}} \sum_{j=0}^{m_k} p^j_{H_{k-1}}\, \mathcal{N}\!\left(x_k;\, x^j_{H_{k-1}},\, P^j_{H_{k-1}}\right), \qquad p^j_{H_{k-1}} = \frac{p^{j*}_{H_{k-1}}}{\sum_{H_{k-1}} \sum_{j=0}^{m_k} p^{j*}_{H_{k-1}}}$$

with

$$p^{j*}_{H_{k-1}} \;=\; \begin{cases} p_{H_{k-1}}\, (1 - P_D)\,\rho_F & j = 0 \\[1ex] p_{H_{k-1}}\, \dfrac{P_D}{\sqrt{\left|2\pi S^j_{H_{k-1}}\right|}}\; \exp\!\left(-\tfrac{1}{2}\, \nu^{j\top}_{H_{k-1}} \big(S^j_{H_{k-1}}\big)^{-1} \nu^j_{H_{k-1}}\right) & j \neq 0 \end{cases}$$

Hint: Insert the mixtures and exploit the product formula in the numerator and the denominator!
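A small sketch of how the unnormalized continuation weights $p^{j*}_{H_{k-1}}$ could be computed (my own helper, assuming one common innovation covariance $S$ per hypothesis):

```python
import numpy as np

def continuation_weights(p_parent, innovations, S, p_d, rho_f):
    """Unnormalized weights p^{j*} for the m_k + 1 continuations of one
    hypothesis H_{k-1}: j = 0 is 'object not detected', j >= 1 picks z_k^j.
    innovations: list of nu^j = z^j - H x_pred; S: innovation covariance."""
    S_inv = np.linalg.inv(S)
    norm = np.sqrt(np.linalg.det(2.0 * np.pi * S))
    weights = [p_parent * (1.0 - p_d) * rho_f]               # j = 0
    for nu in innovations:                                   # j = 1 .. m_k
        weights.append(p_parent * p_d * np.exp(-0.5 * nu @ S_inv @ nu) / norm)
    return np.array(weights)

# the posterior weights p^j_H follow by dividing by the sum of p^{j*}
# over all continuations of all hypotheses
```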

5-8. Problem: Growing Memory

Disaster: $m$ data, $N$ hypotheses $\rightarrow$ $N(m+1)$ continuations

radical solution: mono-hypothesis approximation

• gating: exclude competing data with $\|\nu^i_{k|k-1}\| > \lambda$ → Kalman filter (KF)
  + very simple; − if $\lambda$ is too small: loss of the target measurement
• force a unique interpretation in case of a conflict: look for the smallest statistical distance $\min_i \|\nu^i_{k|k-1}\|$ → Nearest-Neighbor filter (NN) (sketched after this list)
  + one hypothesis; − hard decision; − not adaptive
• global combining: merge all hypotheses → PDAF, JPDAF filter
  + all data; + adaptive; − reduced applicability
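A minimal sketch (my own, not from the lecture) of gating followed by nearest-neighbor selection, using the squared Mahalanobis distance as the statistical distance:

```python
import numpy as np

def gate_and_select(innovations, S, gate_threshold):
    """Mono-hypothesis data association: gate on the squared Mahalanobis
    norm nu^T S^-1 nu <= lambda, then pick the nearest neighbor among the
    surviving measurements (or None if all measurements are gated out)."""
    S_inv = np.linalg.inv(S)
    d2 = [nu @ S_inv @ nu for nu in innovations]
    inside = [j for j, d in enumerate(d2) if d <= gate_threshold]
    if not inside:
        return None                # no association: prediction-only update
    return min(inside, key=lambda j: d2[j])   # nearest-neighbor choice
```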

9. PDAF Filter: formally analogous to the Kalman filter

filtering (scan $k-1$): $p(x_{k-1} \mid \mathcal{Z}^{k-1}) = \mathcal{N}(x_{k-1};\, x_{k-1|k-1},\, P_{k-1|k-1})$ (← initiation)

prediction (scan $k$): $p(x_k \mid \mathcal{Z}^{k-1}) \approx \mathcal{N}(x_k;\, x_{k|k-1},\, P_{k|k-1})$ (like Kalman)

filtering (scan $k$): $p(x_k \mid \mathcal{Z}^k) \approx \sum_{j=0}^{m_k} p^j_k\, \mathcal{N}(x_k;\, x^j_{k|k},\, P^j_{k|k}) \approx \mathcal{N}(x_k;\, x_{k|k},\, P_{k|k})$

combined innovation: $\nu_k = \sum_{j=0}^{m_k} p^j_k\, \nu^j_k$ with $\nu^j_k = z^j_k - H x_{k|k-1}$ (and $\nu^0_k = 0$)

innovation covariance: $S_k = H P_{k|k-1} H^\top + R$; Kalman gain matrix: $W_k = P_{k|k-1} H^\top S_k^{-1}$

weighting factors: $p^j_k = p^{j*}_k / \sum_i p^{i*}_k$ with

$$p^{j*}_k \;=\; \begin{cases} (1 - P_D)\,\rho_F & j = 0 \\[1ex] \dfrac{P_D}{\sqrt{|2\pi S_k|}}\; \exp\!\left(-\tfrac{1}{2}\, \nu^{j\top}_k S_k^{-1} \nu^j_k\right) & j \neq 0 \end{cases}$$

filtering update (Kalman): $x_{k|k} = x_{k|k-1} + W_k \nu_k$

$$P_{k|k} \;=\; \underbrace{P_{k|k-1} - (1 - p^0_k)\, W_k S_k W_k^\top}_{\text{Kalman part}} \;+\; \underbrace{W_k \Big\{ \textstyle\sum_{j=0}^{m_k} p^j_k\, \nu^j_k \nu^{j\top}_k - \nu_k \nu_k^\top \Big\} W_k^\top}_{\text{spread of innovations}}$$
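Putting these formulas together, here is a compact sketch of one PDAF filtering step (my own code following the slide's equations, not the lecture's implementation):

```python
import numpy as np

def pdaf_update(x_pred, P_pred, measurements, H, R, p_d, rho_f):
    """One PDAF filtering step for a single well-separated target."""
    S = H @ P_pred @ H.T + R                       # innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    nus = [z - H @ x_pred for z in measurements]   # innovations nu^j

    # association weights p^j (j = 0: object not detected)
    S_inv = np.linalg.inv(S)
    norm = np.sqrt(np.linalg.det(2.0 * np.pi * S))
    w = [(1.0 - p_d) * rho_f]
    w += [p_d * np.exp(-0.5 * nu @ S_inv @ nu) / norm for nu in nus]
    p = np.array(w) / np.sum(w)

    # combined innovation and Kalman-type state update
    nu_comb = sum(pj * nu for pj, nu in zip(p[1:], nus)) if nus else np.zeros(H.shape[0])
    x_upd = x_pred + W @ nu_comb

    # covariance: Kalman part plus spread-of-innovations term
    spread = sum(pj * np.outer(nu, nu) for pj, nu in zip(p[1:], nus)) - np.outer(nu_comb, nu_comb)
    P_upd = P_pred - (1.0 - p[0]) * W @ S @ W.T + W @ spread @ W.T
    return x_upd, P_upd
```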

10-13. The qualitative shape of $p(x_k \mid \mathcal{Z}^k)$ is often much simpler than its correct representation: a few pronounced modes.

adaptive solution: nearly optimal approximation

• individual gating: exclude irrelevant data
  → before continuing the existing track hypotheses $H_{k-1}$
  → limiting case: Kalman filter (KF)
• pruning: kill hypotheses of very small weight (sketched after this list)
  → after calculating the weights $p_{H_k}$, before filtering
  → limiting case: Nearest-Neighbor filter (NN)
• local combining: merge similar hypotheses
  → after the complete calculation of the pdfs
  → limiting case: PDAF (global combining)
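A minimal pruning sketch (my own; the relative cutoff convention is an assumption, not from the lecture):

```python
import numpy as np

def prune(weights, threshold=1e-3):
    """Drop hypotheses whose weight falls below a (relative) threshold and
    renormalize the survivors. Keeping only the single best hypothesis is
    the Nearest-Neighbor limiting case."""
    weights = np.asarray(weights, dtype=float)
    keep = weights >= threshold * weights.max()    # relative cutoff
    survivors = weights[keep]
    return keep, survivors / survivors.sum()
```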

14-16. Successive Local Combining

Partial sums of similar densities → moment matching:

$$\sum_{H_k \in \mathcal{H}^*_k} p_{H_k}\, \mathcal{N}(x_k;\, x_{H_k},\, P_{H_k}) \;\approx\; p_{H^*_k}\, \mathcal{N}(x_k;\, x_{H^*_k},\, P_{H^*_k}), \qquad \mathcal{H}^*_k \subset \mathcal{H}_k$$

$H^*_k$: effective hypothesis

similarity: $d(H_1, H_2) < \mu$ with (e.g.):

$$d(H_1, H_2) = (x_{H_1} - x_{H_2})^\top \left(P_{H_1} + P_{H_2}\right)^{-1} (x_{H_1} - x_{H_2})$$

Start with the hypothesis of highest weight $H_1$ → search for a similar hypothesis (in order of decreasing weight $p_H$) → merge: $(H_1, H) \mapsto H^*_1$ → continue the search (decreasing $p_H$) ... → restart with the hypothesis of next-highest weight $H_2$ → ...

• in many cases: good approximations → quasi-optimality
• PDAF, JPDAF: $\mathcal{H}^*_k = \mathcal{H}_k$ → limited applicability
• robustness → detail is mostly irrelevant
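A sketch of the two ingredients, moment matching and the similarity test (my own code following the slide's formulas):

```python
import numpy as np

def merge_components(weights, means, covs):
    """Moment matching: replace sum_i p_i N(x; x_i, P_i) by a single
    Gaussian p* N(x; x*, P*) with the same total weight, mean, and
    covariance (the 'effective hypothesis')."""
    p_star = np.sum(weights)
    x_star = sum(p * m for p, m in zip(weights, means)) / p_star
    P_star = sum(
        p * (P + np.outer(m - x_star, m - x_star))
        for p, m, P in zip(weights, means, covs)
    ) / p_star
    return p_star, x_star, P_star

def distance(x1, P1, x2, P2):
    """Similarity measure d(H1, H2) from the slide; merge if < mu."""
    dx = x1 - x2
    return dx @ np.linalg.inv(P1 + P2) @ dx
```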
