Description of the Detection Process
Sensor Data Fusion - Methods and Applications, 10th Lecture on January 16, 2019

  1. Description of the Detection Process
     • Detector: receives signals and decides on object existence.
     • Processor: processes detected signals and produces measurements.
     • 'D': the detector detects an object; D: an object is actually present.

  2. Description of the Detection Process
     • Detector: receives signals and decides on object existence.
     • Processor: processes detected signals and produces measurements.
     • 'D': the detector detects an object; D: an object is actually present.
     • Error of the 1st kind: P_I = P(¬'D' | D); error of the 2nd kind: P_II = P('D' | ¬D).
     • Measure of detection performance: P_D = P('D' | D).
     • Detector properties are characterized by two parameters:
       - detection probability P_D = 1 − P_I
       - false alarm probability P_F = P_II

  3. Description of the Detection Process
     • Detector: receives signals and decides on object existence.
     • Processor: processes detected signals and produces measurements.
     • 'D': the detector detects an object; D: an object is actually present.
     • Error of the 1st kind: P_I = P(¬'D' | D); error of the 2nd kind: P_II = P('D' | ¬D).
     • Measure of detection performance: P_D = P('D' | D).
     • Detector properties are characterized by two parameters:
       - detection probability P_D = 1 − P_I
       - false alarm probability P_F = P_II
     • Example (Swerling I model): P_D = P_D(P_F, SNR) = P_F^(1/(1+SNR)).
     • Detector design: maximize the detection probability P_D for a given, predefined false alarm probability P_F!
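The Swerling I relation above can be evaluated directly. A minimal Python sketch, assuming the SNR is given as a linear power ratio (the conversion from dB is shown explicitly); the numbers in the example are illustrative only:

```python
# Minimal sketch of the Swerling-I relation P_D = P_F ** (1 / (1 + SNR))
# quoted on the slide; SNR is assumed to be a linear power ratio, not dB.

def swerling1_pd(p_false_alarm: float, snr_linear: float) -> float:
    """Detection probability of a Swerling-I target for a given false alarm probability."""
    return p_false_alarm ** (1.0 / (1.0 + snr_linear))

# Illustrative numbers: P_F = 1e-4 and SNR = 13 dB (about a factor of 20) give P_D of roughly 0.64.
snr_db = 13.0
snr_lin = 10 ** (snr_db / 10.0)          # convert dB to a linear ratio
print(swerling1_pd(1e-4, snr_lin))
```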

  4. Likelihood Functions
     The likelihood function answers the question: What does the sensor tell us about the state x of the object? (input: sensor data, sensor model)
     • Ideal conditions, one object: P_D = 1, ρ_F = 0; at each time one measurement:
       p(z_k | x_k) = N(z_k; H x_k, R)
     • Real conditions, one object: P_D < 1, ρ_F > 0; at each time n_k measurements Z_k = {z_k^1, ..., z_k^{n_k}}:
       p(Z_k, n_k | x_k) ∝ (1 − P_D) ρ_F + P_D Σ_{j=1}^{n_k} N(z_k^j; H x_k, R)
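The clutter likelihood of the second bullet is easy to evaluate for a linear Gaussian sensor model. A minimal Python sketch; H, R, P_D and ρ_F below are illustrative values, not lecture data:

```python
import numpy as np

def gauss(z, mean, cov):
    """Gaussian density N(z; mean, cov)."""
    d = z - mean
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt(np.linalg.det(2.0 * np.pi * cov))

def clutter_likelihood(measurements, x, H, R, p_d, rho_f):
    """Unnormalized p(Z_k, n_k | x_k) = (1 - P_D) rho_F + P_D * sum_j N(z_k^j; H x_k, R)."""
    z_pred = H @ x
    return (1.0 - p_d) * rho_f + p_d * sum(gauss(z, z_pred, R) for z in measurements)

# Toy example: 2D position measurements of a 4D position/velocity state.
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])
R = 25.0 * np.eye(2)
x = np.array([100., 200., 5., -3.])
Z = [np.array([103., 198.]), np.array([160., 240.])]   # one plausible hit, one clutter-like return
print(clutter_likelihood(Z, x, H, R, p_d=0.9, rho_f=1e-4))
```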

  5. Bayes Filtering for P_D < 1, ρ_F > 0, well-separated objects
     • Accumulated data Z^k = {Z_k, Z^{k−1}}, current data Z_k = {z_k^j}_{j=1}^{m_k}, state x_k.
     • Interpretation hypotheses E_k for Z_k (m_k + 1 interpretations):
       - object not detected (factor 1 − P_D)
       - z_k^j ∈ Z_k stems from the object (factor P_D)
     • Tree structure: interpretation histories H_k for Z^k: H_k = (E_{H_k}, H_{k−1}),
       with current interpretation E_{H_k} and pre-history H_{k−1}.
     • p(x_k | Z^k) = Σ_{H_k} p(x_k, H_k | Z^k) = Σ_{H_k} P(H_k | Z^k) p(x_k | H_k, Z^k)
       'mixture' density: P(H_k | Z^k) is the weight; given H_k, the interpretation of the data is unique.
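A small Python sketch of the interpretation tree: every history is continued with m_k + 1 interpretations of the current scan. The weight handling here is only qualitative (P_D is split uniformly over the measurements to keep the sketch normalized); the true weights also involve ρ_F and the Gaussian likelihoods of the next slide:

```python
# Histories are tuples of interpretation indices per scan:
# index 0 = "object not detected", index j = "z_k^j stems from the object".

def expand_histories(histories, m_k, p_d):
    """histories: list of (interpretation tuple, weight). Returns all m_k + 1 continuations per history."""
    expanded = []
    for interpretations, weight in histories:
        expanded.append((interpretations + (0,), weight * (1.0 - p_d)))   # missed detection
        for j in range(1, m_k + 1):                                       # z_k^j from the object
            # P_D split uniformly over the m_k measurements (sketch only)
            expanded.append((interpretations + (j,), weight * p_d / m_k))
    return expanded

roots = [((), 1.0)]
after_scan_1 = expand_histories(roots, m_k=2, p_d=0.9)
after_scan_2 = expand_histories(after_scan_1, m_k=3, p_d=0.9)
print(len(after_scan_1), len(after_scan_2))   # 3 and 12 histories
```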

  6. Closer Look: P_D < 1, ρ_F > 0, well-separated targets
     Filtering (at time t_{k−1}):
       p(x_{k−1} | Z^{k−1}) = Σ_{H_{k−1}} p_{H_{k−1}} N(x_{k−1}; x_{H_{k−1}}, P_{H_{k−1}})
     Prediction (for time t_k):   (Markov model; IMM also possible)
       p(x_k | Z^{k−1}) = ∫ dx_{k−1} p(x_k | x_{k−1}) p(x_{k−1} | Z^{k−1})
                        = Σ_{H_{k−1}} p_{H_{k−1}} N(x_k; F x_{H_{k−1}}, F P_{H_{k−1}} F^⊤ + D)
     Measurement likelihood (E_k^j: interpretations; sensor model H, R, P_D, ρ_F):
       p(Z_k, m_k | x_k) = Σ_{j=0}^{m_k} p(Z_k | E_k^j, x_k, m_k) P(E_k^j | x_k, m_k)
                         ∝ (1 − P_D) ρ_F + P_D Σ_{j=1}^{m_k} N(z_k^j; H x_k, R)
     Filtering (at time t_k):   (Bayes' rule; exploit the product formula)
       p(x_k | Z^k) ∝ p(Z_k, m_k | x_k) p(x_k | Z^{k−1})
                    = Σ_{H_k} p_{H_k} N(x_k; x_{H_k}, P_{H_k})
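The prediction step of this recursion acts on each mixture component separately. A minimal Python sketch, assuming a linear motion model with illustrative F and D (nearly constant velocity in one dimension):

```python
import numpy as np

def predict_mixture(components, F, D):
    """components: list of (weight p_H, mean x_H, covariance P_H).
    Pushes every hypothesis through x_k = F x_{k-1} + v, v ~ N(0, D)."""
    return [(p, F @ x, F @ P @ F.T + D) for p, x, P in components]

dt = 1.0
F = np.array([[1., dt],
              [0., 1.]])
D = np.array([[dt**3 / 3, dt**2 / 2],
              [dt**2 / 2, dt      ]]) * 0.1        # white-acceleration process noise (illustrative)
components = [(0.7, np.array([0., 1.]), np.eye(2)),
              (0.3, np.array([5., 0.]), 4.0 * np.eye(2))]
print(predict_mixture(components, F, D))
```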

  7. Problem: Growing Memory Disaster
     • m data, N hypotheses → N (m + 1) continuations.
     • Radical solution: mono-hypothesis approximation.
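A quick back-of-the-envelope illustration of the growth, assuming made-up scan sizes m_k: starting from a single hypothesis, the number of interpretation histories is multiplied by m_k + 1 at every scan:

```python
from math import prod

scans = [3, 2, 4, 3, 5]                     # m_k for five consecutive scans (made up)
print(prod(m + 1 for m in scans))           # 4 * 3 * 5 * 4 * 6 = 1440 histories after 5 scans
```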

  8. Problem: Growing Memory Disaster
     • m data, N hypotheses → N (m + 1) continuations.
     • Radical solution: mono-hypothesis approximation.
     • Gating: exclude competing data with ||ν_{k|k−1}^i|| > λ!  → Kalman filter (KF)
       + very simple;  − if λ is too small: loss of the target measurement.
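A minimal Python sketch of the gating test. Using the Mahalanobis norm with the innovation covariance S = H P H^⊤ + R is an assumption here; the slide only writes ||ν|| > λ:

```python
import numpy as np

def gate(measurements, x_pred, P_pred, H, R, lam):
    """Keep only measurements whose innovation passes the gate ||nu|| <= lambda."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    S_inv = np.linalg.inv(S)
    kept = []
    for z in measurements:
        nu = z - H @ x_pred                  # innovation
        if nu @ S_inv @ nu <= lam**2:        # squared Mahalanobis distance vs. gate
            kept.append(z)
    return kept

# Illustrative 1D position measurement of a position/velocity state.
H = np.array([[1., 0.]])
R = np.array([[1.0]])
x_pred, P_pred = np.array([0., 0.]), np.diag([4., 1.])
print(gate([np.array([1.5]), np.array([10.])], x_pred, P_pred, H, R, lam=3.0))
```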

  9. Problem: Growing Memory Disaster
     • m data, N hypotheses → N (m + 1) continuations.
     • Radical solution: mono-hypothesis approximation.
     • Gating: exclude competing data with ||ν_{k|k−1}^i|| > λ!  → Kalman filter (KF)
       + very simple;  − if λ is too small: loss of the target measurement.
     • Force a unique interpretation in case of a conflict! Look for the smallest statistical distance:
       min_i ||ν_{k|k−1}^i||  → Nearest-Neighbor filter (NN)
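A sketch of the nearest-neighbor choice, reusing the Mahalanobis distance assumed in the gating sketch above; the selected measurement would then feed a standard Kalman update:

```python
import numpy as np

def nearest_neighbor(measurements, x_pred, H, S_inv):
    """Return the measurement with the smallest statistical distance to the prediction."""
    def dist(z):
        nu = z - H @ x_pred
        return nu @ S_inv @ nu
    return min(measurements, key=dist)

S_inv = np.linalg.inv(np.array([[5.0]]))     # illustrative innovation covariance S = 5
print(nearest_neighbor([np.array([1.5]), np.array([-0.4])],
                       np.array([0., 0.]), np.array([[1., 0.]]), S_inv))
```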

  10. Problem: Growing Memory Disaster
     • m data, N hypotheses → N (m + 1) continuations.
     • Radical solution: mono-hypothesis approximation.
     • Gating: exclude competing data with ||ν_{k|k−1}^i|| > λ!  → Kalman filter (KF)
       + very simple;  − if λ is too small: loss of the target measurement.
     • Force a unique interpretation in case of a conflict! Look for the smallest statistical distance:
       min_i ||ν_{k|k−1}^i||  → Nearest-Neighbor filter (NN)
       + one hypothesis;  − hard decision;  − not adaptive.
     • Global combining: merge all hypotheses!  → PDAF, JPDAF filters
       + all data;  + adaptive;  − reduced applicability.

  11. PDAF Filter: formally analogous to the Kalman Filter
     Filtering (scan k−1):  p(x_{k−1} | Z^{k−1}) = N(x_{k−1}; x_{k−1|k−1}, P_{k−1|k−1})   (→ initiation)
     Prediction (scan k):   p(x_k | Z^{k−1}) ≈ N(x_k; x_{k|k−1}, P_{k|k−1})   (like Kalman)
     Filtering (scan k):    p(x_k | Z^k) ≈ Σ_{j=0}^{m_k} p_k^j N(x_k; x_{k|k}^j, P_{k|k}^j) ≈ N(x_k; x_{k|k}, P_{k|k})
     Combined innovation:   ν_k = Σ_{j=0}^{m_k} p_k^j ν_k^j,  ν_k^j = z_k^j − H x_{k|k−1}
     Innovation covariance and Kalman gain:  S_k = H P_{k|k−1} H^⊤ + R,  W_k = P_{k|k−1} H^⊤ S_k^{−1}
     Weighting factors:     p_k^j = p_k^{j*} / Σ_i p_k^{i*},  p_k^{0*} = (1 − P_D) ρ_F,
                            p_k^{j*} = P_D |2π S_k|^{−1/2} exp(−½ ν_k^{j⊤} S_k^{−1} ν_k^j)  for j = 1, ..., m_k
     Filtering update (Kalman):  x_{k|k} = x_{k|k−1} + W_k ν_k
     P_{k|k} = P_{k|k−1} − (1 − p_k^0) W_k S_k W_k^⊤   (Kalman part)
               + W_k [ Σ_{j=0}^{m_k} p_k^j ν_k^j ν_k^{j⊤} − ν_k ν_k^⊤ ] W_k^⊤   (spread of innovations)
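A minimal Python sketch of one PDAF update implementing the formulas on this slide for a linear Gaussian sensor model; the numerical values in the example are made up:

```python
import numpy as np

def pdaf_update(x_pred, P_pred, measurements, H, R, p_d, rho_f):
    S = H @ P_pred @ H.T + R                        # innovation covariance
    S_inv = np.linalg.inv(S)
    W = P_pred @ H.T @ S_inv                        # Kalman gain
    nus = [z - H @ x_pred for z in measurements]    # individual innovations

    # Unnormalized weights: index 0 = "all measurements are false returns".
    norm = 1.0 / np.sqrt(np.linalg.det(2.0 * np.pi * S))
    p = [(1.0 - p_d) * rho_f]
    p += [p_d * norm * np.exp(-0.5 * nu @ S_inv @ nu) for nu in nus]
    p = np.array(p) / np.sum(p)                     # p_k^0, ..., p_k^{m_k}

    nu_comb = sum(p[j + 1] * nus[j] for j in range(len(nus)))   # combined innovation
    x_upd = x_pred + W @ nu_comb

    spread = sum(p[j + 1] * np.outer(nus[j], nus[j]) for j in range(len(nus)))
    spread -= np.outer(nu_comb, nu_comb)            # spread-of-innovations term
    P_upd = P_pred - (1.0 - p[0]) * W @ S @ W.T + W @ spread @ W.T
    return x_upd, P_upd

# Illustrative 1D position measurement of a position/velocity state, two returns in the gate.
H = np.array([[1., 0.]]); R = np.array([[1.0]])
x0, P0 = np.array([0., 1.]), np.diag([4., 1.])
Z = [np.array([0.8]), np.array([6.0])]
print(pdaf_update(x0, P0, Z, H, R, p_d=0.9, rho_f=0.1))
```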

  12. The qualitative shape of p(x_k | Z^k) is often much simpler than its correct representation: a few pronounced modes.
     Adaptive solution: nearly optimal approximation.

  13. The qualitative shape of p(x_k | Z^k) is often much simpler than its correct representation: a few pronounced modes.
     Adaptive solution: nearly optimal approximation.
     • Individual gating: exclude irrelevant data! (before continuing existing track hypotheses H_{k−1})
       → limiting case: Kalman filter (KF)

  14. The qualitative shape of p(x_k | Z^k) is often much simpler than its correct representation: a few pronounced modes.
     Adaptive solution: nearly optimal approximation.
     • Individual gating: exclude irrelevant data! (before continuing existing track hypotheses H_{k−1})
       → limiting case: Kalman filter (KF)
     • Pruning: kill hypotheses of very small weight! (after calculating the weights p_{H_k}, before filtering)
       → limiting case: Nearest-Neighbor filter (NN)
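A minimal Python sketch of the pruning step (the threshold value is illustrative): hypotheses whose weight falls below the threshold are removed and the remaining weights are renormalized.

```python
import numpy as np

def prune(weights, threshold=1e-3):
    """Drop hypotheses with weight below the threshold and renormalize the rest."""
    weights = np.asarray(weights, dtype=float)
    keep = weights >= threshold
    pruned = weights[keep]
    return keep, pruned / pruned.sum()

keep, w = prune([0.6, 0.3, 0.0995, 0.0005])
print(keep, w)       # the last hypothesis is dropped, the rest are renormalized
```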

  15. The qualitative shape of p(x_k | Z^k) is often much simpler than its correct representation: a few pronounced modes.
     Adaptive solution: nearly optimal approximation.
     • Individual gating: exclude irrelevant data! (before continuing existing track hypotheses H_{k−1})
       → limiting case: Kalman filter (KF)
     • Pruning: kill hypotheses of very small weight! (after calculating the weights p_{H_k}, before filtering)
       → limiting case: Nearest-Neighbor filter (NN)
     • Local combining: merge similar hypotheses! (after the complete calculation of the pdfs)
       → limiting case: PDAF (global combining)
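For local combining, a common choice (an assumption here, since the slide does not fix the method) is moment matching: the merged Gaussian keeps the overall mean and covariance of the combined components. A minimal sketch:

```python
import numpy as np

def merge(w1, m1, P1, w2, m2, P2):
    """Merge two weighted Gaussian components by moment matching."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    P = (w1 * (P1 + np.outer(m1 - m, m1 - m)) +
         w2 * (P2 + np.outer(m2 - m, m2 - m))) / w
    return w, m, P

print(merge(0.6, np.array([0., 1.]), np.eye(2),
            0.4, np.array([0.5, 1.2]), 2.0 * np.eye(2)))
```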

  16. Retrodiction of the Hypotheses' Weights
     Consider an approximation that neglects the RTS (Rauch-Tung-Striebel) step:
       p(x_l | H_k, Z^k) = N(x_l; x_{H_k}(l|k), P_{H_k}(l|k)) ≈ N(x_l; x_{H_k}(l|l), P_{H_k}(l|l))
       p(x_l | Z^k) ≈ Σ_{H_l} p_{H_l}^* N(x_l; x_{H_l}(l|l), P_{H_l}(l|l))
     with recursively defined weights p_{H_l}^* = Σ_{H_{l+1}} p_{H_{l+1}}^*, initialized by p_{H_k}^* = p_{H_k};
     the summation runs over all histories H_{l+1} with equal pre-history H_l.
     • Strong sons strengthen weak fathers.
     • Weak sons weaken even strong fathers.
     • If all sons die, the father must die as well.
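A small Python sketch of the weight recursion: histories are stored as tuples of interpretation indices, so the "father" of a history is its prefix, and each father's retrodicted weight is the sum of its sons' weights, starting from the current filtering weights p_{H_k}:

```python
from collections import defaultdict

def retrodict_weights(current_weights):
    """current_weights: dict mapping history tuples at time k to their weights p_{H_k}."""
    levels = [dict(current_weights)]
    while len(next(iter(levels[-1]))) > 1:
        parents = defaultdict(float)
        for history, weight in levels[-1].items():
            parents[history[:-1]] += weight      # sum over all sons with equal pre-history
        levels.append(dict(parents))
    return levels                                # weights p*_{H_l} for l = k, k-1, ..., 1

p_k = {(1, 0): 0.5, (1, 2): 0.3, (2, 1): 0.2}    # illustrative filtering weights
print(retrodict_weights(p_k))
# father (1,) receives 0.8, father (2,) receives 0.2: strong sons strengthen their fathers
```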
