Robust Data Processing in the Presence of Uncertainty and Outliers: Case of Localization Problems

Anthony Welte¹, Luc Jaulin¹, Martine Ceberio², and Vladik Kreinovich²

¹ Lab STICC, École Nationale Supérieure de Techniques Avancées Bretagne (ENSTA Bretagne), 2 rue François Verny, 29806 Brest, France, tony.welte@gmail.com, lucjaulin@gmail.com

² Department of Computer Science, University of Texas at El Paso, El Paso, Texas 79968, USA, mceberio@utep.edu, vladik@utep.edu
1. Outline

• To properly process data, we need to take into account:
  – the measurement errors and
  – the fact that some of the observations may be outliers.
• This is especially important in radar-based localization, where some signals may reflect:
  – not from the analyzed object,
  – but from some nearby object.
• There are known methods for situations when we have full information about the probabilities.
• There are also methods for dealing with measurement errors when we only have partial information about the probabilities.
• In this talk, we extend these methods to situations with outliers.
2. Need for Data Processing

• We are often interested in quantities p_1, …, p_m which are difficult to measure directly.
• We find a measurable quantity y that depends on the p_i and on settings x_j: y = f(p_1, …, p_m, x_1, …, x_n).
• For example, locating an object (robot, satellite, etc.) means finding its coordinates p_1, …
• We cannot directly measure coordinates, but we can measure, e.g., a distance y = √(∑_{i=1}^{3} (p_i − x_i)²).
• In general, we measure y_k under different settings (x_{k1}, …), and reconstruct the p_i from the condition y_k = f(p_1, …, p_m, x_{k1}, …, x_{kn}).
• This is an important case of data processing.
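For concreteness, here is a minimal Python sketch of this setup (assuming NumPy is available); the beacon locations, the true position, and the noise level are illustrative assumptions, not values from the talk:

    import numpy as np

    # Hypothetical 3-D localization setup: the unknown is the position
    # p = (p_1, p_2, p_3); measurement k returns the distance
    # y_k = f(p, x_k) to a beacon at known location x_k.
    def f(p, x_k):
        """Forward model: Euclidean distance from position p to beacon x_k."""
        return np.linalg.norm(np.asarray(p) - np.asarray(x_k))

    beacons = np.array([[0.0, 0.0, 0.0],
                        [10.0, 0.0, 0.0],
                        [0.0, 10.0, 0.0],
                        [0.0, 0.0, 10.0]])
    p_true = np.array([3.0, 4.0, 2.0])

    # Measured distances y_k; small Gaussian noise stands in for the errors Δy_k.
    rng = np.random.default_rng(0)
    y = np.array([f(p_true, x_k) for x_k in beacons]) + 0.01 * rng.standard_normal(4)

The sketches on the following slides reuse this setup (with fixed measured values, so each sketch runs on its own).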
3. Need to Take into Account Measurement Uncertainty and Outliers

• Measurements are never absolutely accurate.
• There is always a non-zero difference between the measurement result y_k and the actual (unknown) value: Δy_k = y_k − f(p_1, …, p_m, x_{k1}, …, x_{kn}) ≠ 0 (by definition).
• Sometimes, the measuring instrument malfunctions.
• Then, we get outliers: values which are very different from the actual quantity.
• This is especially important in radar-based localization, where some signals may reflect:
  – not from the analyzed object,
  – but from some nearby object.
4. Case When We Know the Probability Distribution ρ(Δy) of the Measurement Error

• In this case, for each p, the probability to observe y_k is proportional to ρ(Δy_k) = ρ(y_k − f(p, x_k)).
• Measurement errors corresponding to different measurements are usually independent.
• So, the probability of observing all the observed values y_1, …, y_K is equal to the product ∏_{k=1}^{K} ρ(y_k − f(p, x_k)).
• It is reasonable to select the most probable value p, i.e., the one for which this product is the largest.
• This idea is known as the Maximum Likelihood Method.
• For Gaussian distributions, this leads to the usual Least Squares Method: ∑_{k=1}^{K} (Δy_k)² → min.
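A minimal sketch of the resulting Least Squares estimation, using SciPy's general-purpose optimizer on the illustrative beacon setup introduced earlier (the measured values below are assumed):

    import numpy as np
    from scipy.optimize import minimize

    # Same illustrative setup as before: beacons and noisy distance measurements.
    beacons = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                        [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
    y = np.array([5.39, 8.31, 7.00, 9.43])   # assumed measured distances

    def sum_of_squares(p):
        """Least Squares objective: sum_k (y_k - f(p, x_k))^2."""
        residuals = y - np.array([np.linalg.norm(p - x_k) for x_k in beacons])
        return np.sum(residuals ** 2)

    # Under the Gaussian model, the minimizer is the Maximum Likelihood estimate.
    result = minimize(sum_of_squares, x0=np.ones(3))
    p_hat = result.x
    print("Least Squares estimate:", p_hat)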
5. What If We Only Have Partial Information About the Probabilities: First Case

• Sometimes, we know that the probability distribution has the form ρ(Δy, θ) for some parameters θ = (θ_1, …, θ_ℓ).
• In this case, the corresponding "likelihood function" takes the form L = ∏_{k=1}^{K} ρ(Δy_k, θ).
• We then select a pair (p, θ) for which the probability is the largest: L = ∏_{k=1}^{K} ρ(y_k − f(p, x_k), θ) → max_{p,θ}.
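As one possible illustration, the sketch below takes the family ρ(Δy, θ) to be Gaussian with unknown standard deviation σ (so θ = σ) and maximizes the likelihood jointly over (p, σ); this specific family, and the reparametrization via ln σ, are our assumptions, not choices made in the talk:

    import numpy as np
    from scipy.optimize import minimize

    # Same illustrative setup as before.
    beacons = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                        [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
    y = np.array([5.39, 8.31, 7.00, 9.43])

    def neg_log_likelihood(z):
        """-ln L for a Gaussian family rho(dy, sigma); z = (p_1, p_2, p_3, ln sigma)."""
        p, sigma = z[:3], np.exp(z[3])        # optimize ln(sigma) so sigma stays > 0
        dy = y - np.array([np.linalg.norm(p - x_k) for x_k in beacons])
        # -ln L = K*ln(sigma) + sum_k dy_k^2 / (2 sigma^2), up to an additive constant
        return len(y) * np.log(sigma) + np.sum(dy ** 2) / (2 * sigma ** 2)

    result = minimize(neg_log_likelihood, x0=np.array([1.0, 1.0, 1.0, 0.0]))
    p_hat, sigma_hat = result.x[:3], np.exp(result.x[3])
    print("Joint ML estimate:", p_hat, "with sigma =", sigma_hat)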
6. What If We Only Have Partial Information About the Probabilities: Non-Parametric Case

• In many practical situations, we do not know the finite-parametric family containing the actual distribution.
• Each possible distribution ρ(Δy) can be characterized by its entropy S = −∫ ρ(Δy) · ln(ρ(Δy)) dΔy.
• Entropy describes how many binary questions we need to ask to uniquely determine Δy.
• We want to select a distribution that reflects this uncertainty to the largest extent.
• In other words, it is reasonable to select the distribution for which the entropy is the largest possible.
• For example, among the distributions ρ(Δy) located on [−Δ, Δ], the uniform distribution has the largest entropy.
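The last claim follows from a standard Lagrange-multiplier argument; a sketch (not spelled out in the slides):

    % Maximize the entropy subject to normalization, over distributions on [-\Delta, \Delta]:
    %   S = -\int_{-\Delta}^{\Delta} \rho(\Delta y)\,\ln(\rho(\Delta y))\,d\Delta y,
    %   \int_{-\Delta}^{\Delta} \rho(\Delta y)\,d\Delta y = 1.
    % Setting the variational derivative of
    %   S + \lambda\left(\int \rho\,d\Delta y - 1\right)
    % to zero gives
    -\ln(\rho(\Delta y)) - 1 + \lambda = 0
    \;\Longrightarrow\;
    \rho(\Delta y) = e^{\lambda - 1} = \text{const},
    % and normalization over the interval of width 2\Delta yields
    \rho(\Delta y) = \frac{1}{2\Delta}.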
7. Need for Interval Computations

• For uniform distributions:
  – the value ρ(Δy_k) = 0 if Δy_k is outside the interval [−Δ, Δ], and
  – it is equal to a constant when Δy_k is inside this interval.
• Thus, the product L of these probabilities is constant when |Δy_k| ≤ Δ for all k.
• So, instead of a single tuple p, we now need to describe all the tuples p for which |y_k − f(p, x_k)| ≤ Δ for all k = 1, …, K.
• This is a particular case of interval computations.
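A deliberately crude sketch of describing this solution set, with a brute-force grid standing in for proper interval methods; the bound Δ and the grid resolution are illustrative choices:

    import numpy as np
    from itertools import product

    # Same illustrative setup as before, with an assumed known error bound Delta.
    beacons = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                        [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
    y = np.array([5.39, 8.31, 7.00, 9.43])
    Delta = 0.05

    # Crude stand-in for interval computations: enumerate a coarse grid and keep
    # every candidate p with |y_k - f(p, x_k)| <= Delta for all k.  Dedicated
    # interval/contractor libraries describe this solution set far more efficiently.
    grid = np.linspace(0.0, 10.0, 51)
    feasible = []
    for p in product(grid, grid, grid):
        dy = y - np.array([np.linalg.norm(np.array(p) - x_k) for x_k in beacons])
        if np.all(np.abs(dy) <= Delta):
            feasible.append(np.array(p))     # p is consistent with all measurements
    print(len(feasible), "grid points are consistent with all measurements")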
8. What If We Have No Information About the Probabilities of Measurement Errors

• This situation is similar to the previous one, except that now, we do not know the bound Δ.
• A reasonable idea is to select the Δ for which the corresponding likelihood L = 1/(2Δ)^K is the largest possible (each of the K uniform factors is equal to 1/(2Δ)).
• Selecting the largest possible L is equivalent to selecting the smallest possible Δ.
• The only constraint on Δ is that Δ ≥ |Δy_k| for all k.
• The smallest Δ satisfying it is Δ = max_k |Δy_k|.
• Thus, minimizing Δ means selecting p for which max_k |Δy_k| = max_k |y_k − f(p, x_k)| is the smallest.
• This minimax approach is indeed frequently used in data processing.
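A minimal sketch of this minimax approach, again on the illustrative setup; Nelder-Mead is one reasonable choice here, since the objective is non-smooth:

    import numpy as np
    from scipy.optimize import minimize

    # Same illustrative setup as before; now no bound Delta is assumed known.
    beacons = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                        [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
    y = np.array([5.39, 8.31, 7.00, 9.43])

    def worst_residual(p):
        """Minimax objective: max_k |y_k - f(p, x_k)|."""
        dy = y - np.array([np.linalg.norm(p - x_k) for x_k in beacons])
        return np.max(np.abs(dy))

    # Nelder-Mead copes with the non-smooth max(|.|) objective.
    result = minimize(worst_residual, x0=np.ones(3), method="Nelder-Mead")
    p_hat, Delta_hat = result.x, result.fun   # estimate and smallest consistent bound
    print("Minimax estimate:", p_hat, "with Delta =", Delta_hat)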
9. How to Take Both Uncertainty and Outliers into Account

• We considered 4 cases:
  – we know the exact distribution;
  – we know the finite-parametric family of distributions;
  – we know the upper bound on the absolute value of the corresponding difference; and
  – we have no information whatsoever, not even the upper bound.
• In principle, we may have the same four possible types of information about the outlier probabilities ρ_0(Δy).
• At first glance, it may therefore seem that we can have 4 × 4 = 16 possible combinations.
• In reality, however, not all such combinations are possible.