Towards a Neural-Based Understanding of the Cauchy Deviate Method for Processing Interval and Fuzzy Uncertainty

Vladik Kreinovich$^1$ and Hung T. Nguyen$^2$

$^1$ Department of Computer Science, University of Texas at El Paso, El Paso, TX 79968, USA, vladik@utep.edu
$^2$ Department of Mathematical Sciences, New Mexico State University, Las Cruces, NM 88003, USA, hunguyen@nmsu.edu
1. Practical Need for Uncertainty Propagation

• Practical problem: we are often interested in a quantity $y$ which is difficult to measure directly.
• Solution:
  – estimate easier-to-measure quantities $x_1, \ldots, x_n$ which are related to $y$ by a known algorithm $y = f(x_1, \ldots, x_n)$;
  – compute $\tilde y = f(\tilde x_1, \ldots, \tilde x_n)$ based on the estimates $\tilde x_i$.
• Fact: estimates are never absolutely accurate: $\tilde x_i \ne x_i$.
• Consequence: the estimate $\tilde y = f(\tilde x_1, \ldots, \tilde x_n)$ is different from the actual value $y = f(x_1, \ldots, x_n)$.
• Problem: estimate the uncertainty $\Delta y \stackrel{\text{def}}{=} \tilde y - y$.
2. Propagation of Probabilistic Uncertainty

• Fact: often, we know the probabilities of different values of $\Delta x_i$.
• Example: the $\Delta x_i$ are independent and normally distributed with mean 0 and known standard deviations $\sigma_i$.
• Monte-Carlo approach (sketched below):
  – For $k = 1, \ldots, N$ times, we:
    ∗ simulate the values $\Delta x_i^{(k)}$ according to the known probability distributions for $\Delta x_i$;
    ∗ find $x_i^{(k)} = \tilde x_i - \Delta x_i^{(k)}$;
    ∗ find $y^{(k)} = f(x_1^{(k)}, \ldots, x_n^{(k)})$;
    ∗ estimate $\Delta y^{(k)} = \tilde y - y^{(k)}$.
  – Based on the sample $\Delta y^{(1)}, \ldots, \Delta y^{(N)}$, we estimate the statistical characteristics of $\Delta y$.
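A minimal Python sketch of this loop; the function f, the estimates, and the standard deviations below are hypothetical placeholders, not values from the slides:

```python
import numpy as np

def f(x1, x2):
    # placeholder for the (usually expensive) data-processing algorithm f
    return x1 * x2 + np.sin(x1)

x_tilde = np.array([1.0, 2.0])   # estimates x~_i (assumed values)
sigma = np.array([0.1, 0.05])    # known standard deviations of Delta x_i
N = 10_000

y_tilde = f(*x_tilde)            # y~ = f(x~_1, ..., x~_n)
rng = np.random.default_rng(0)
dy = np.empty(N)
for k in range(N):
    dx = rng.normal(0.0, sigma)           # simulate Delta x_i^(k)
    dy[k] = y_tilde - f(*(x_tilde - dx))  # Delta y^(k) = y~ - y^(k)

# statistical characteristics of Delta y from the sample
print("mean:", dy.mean(), "st. dev.:", dy.std())
```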
3. Propagation of Interval Uncertainty

• In practice: we often do not know the probabilities.
• What we know: the upper bounds $\Delta_i$ on the measurement errors $\Delta x_i$: $|\Delta x_i| \le \Delta_i$.
• Enter intervals: once we know $\tilde x_i$, we conclude that the actual (unknown) $x_i$ is in the interval $\mathbf{x}_i = [\tilde x_i - \Delta_i,\ \tilde x_i + \Delta_i]$.
• Problem: find the range $\mathbf{y} = [\underline y, \overline y]$ of possible values of $y$ when $x_i \in \mathbf{x}_i$:
  $\mathbf{y} \stackrel{\text{def}}{=} f(\mathbf{x}_1, \ldots, \mathbf{x}_n) = \{ f(x_1, \ldots, x_n) \mid x_1 \in \mathbf{x}_1, \ldots, x_n \in \mathbf{x}_n \}$.
• Fact: this interval computation problem is, in general, NP-hard.
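For intuition only, the range can be approximated from the inside by brute-force sampling of the box; the cost grows exponentially in n, which is exactly why faster methods are needed. The toy f and bounds are assumptions:

```python
import itertools
import numpy as np

def f(x1, x2):
    return x1 * x2 + np.sin(x1)

x_tilde = np.array([1.0, 2.0])
delta   = np.array([0.1, 0.05])          # bounds Delta_i
# 21 grid points per input => 21**n calls to f: exponential in n
grids = [np.linspace(xt - d, xt + d, 21) for xt, d in zip(x_tilde, delta)]
values = [f(*point) for point in itertools.product(*grids)]
print("inner approximation of [y_lower, y_upper]:", min(values), max(values))
```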
4. Propagation of Fuzzy Uncertainty

• In many practical situations, the estimates $\tilde x_i$ come from experts.
• Experts often describe the inaccuracy of their estimates by natural-language terms like "approximately 0.1".
• A natural way to formalize such terms is to use membership functions $\mu_i(x_i)$.
• For each $\alpha$, we can determine the $\alpha$-cut $\mathbf{x}_i(\alpha) = \{ x_i \mid \mu_i(x_i) \ge \alpha \}$.
• Natural idea: find $\mu(y)$ for which, for each $\alpha$,
  $\mathbf{y}(\alpha) = f(\mathbf{x}_1(\alpha), \ldots, \mathbf{x}_n(\alpha))$.
• So, the problem of propagating fuzzy uncertainty can be reduced to several interval propagation problems, as the sketch below illustrates.
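A sketch of this reduction, assuming (hypothetically) triangular membership functions and reusing the grid-based range estimate from above:

```python
import itertools
import numpy as np

def f(x1, x2):
    return x1 * x2 + np.sin(x1)

def alpha_cut(a, m, b, alpha):
    # alpha-cut {x : mu(x) >= alpha} of a triangular membership
    # function with support [a, b] and peak at m
    return (a + alpha * (m - a), b - alpha * (b - m))

for alpha in (0.0, 0.5, 1.0):
    cuts = [alpha_cut(0.9, 1.0, 1.1, alpha),   # "approximately 1"
            alpha_cut(1.8, 2.0, 2.2, alpha)]   # "approximately 2"
    # each alpha-cut is an interval, so y(alpha) is an interval range problem
    grids = [np.linspace(lo, hi, 21) for lo, hi in cuts]
    values = [f(*point) for point in itertools.product(*grids)]
    print(alpha, (min(values), max(values)))   # the alpha-cut y(alpha)
```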
5. Need for Faster Algorithms for Uncertainty Propagation

• For propagating probabilistic uncertainty, there are efficient algorithms such as Monte-Carlo simulations.
• In contrast, the problems of propagating interval and fuzzy uncertainty are computationally difficult.
• It is therefore desirable to design faster algorithms for propagating interval and fuzzy uncertainty.
• The problem of propagating fuzzy uncertainty can be reduced to the interval case.
• Hence, we mainly concentrate on faster algorithms for propagating interval uncertainty.
6. Linearization

• In many practical situations, the errors $\Delta x_i$ are small, so we can ignore quadratic terms:
  $\Delta y = \tilde y - y = f(\tilde x_1, \ldots, \tilde x_n) - f(\tilde x_1 - \Delta x_1, \ldots, \tilde x_n - \Delta x_n) \approx c_1 \cdot \Delta x_1 + \ldots + c_n \cdot \Delta x_n$,
  where $c_i \stackrel{\text{def}}{=} \dfrac{\partial f}{\partial x_i}(\tilde x_1, \ldots, \tilde x_n)$.
• For a linear function, the largest $\Delta y$ is attained when each term $c_i \cdot \Delta x_i$ is the largest:
  $\Delta = |c_1| \cdot \Delta_1 + \ldots + |c_n| \cdot \Delta_n$.
• Due to the linearization assumption, we can estimate each partial derivative $c_i$ as
  $c_i \approx \dfrac{f(\tilde x_1, \ldots, \tilde x_{i-1}, \tilde x_i + h_i, \tilde x_{i+1}, \ldots, \tilde x_n) - \tilde y}{h_i}$.
7. Linearization: Algorithm

To compute the range $\mathbf{y}$ of $y$, we do the following (see the sketch after this list).

• First, we apply the algorithm $f$ to the original estimates $\tilde x_1, \ldots, \tilde x_n$, resulting in the value $\tilde y = f(\tilde x_1, \ldots, \tilde x_n)$.
• Second, for each $i$ from 1 to $n$:
  – we compute $f(\tilde x_1, \ldots, \tilde x_{i-1}, \tilde x_i + h_i, \tilde x_{i+1}, \ldots, \tilde x_n)$ for some small $h_i$, and then
  – we compute $c_i = \dfrac{f(\tilde x_1, \ldots, \tilde x_{i-1}, \tilde x_i + h_i, \tilde x_{i+1}, \ldots, \tilde x_n) - \tilde y}{h_i}$.
• Finally, we compute $\Delta = |c_1| \cdot \Delta_1 + \ldots + |c_n| \cdot \Delta_n$ and the desired range $\mathbf{y} = [\tilde y - \Delta, \tilde y + \Delta]$.
• Problem: this requires $n + 1$ calls to $f$, which for large $n$ often takes too long.
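A minimal sketch of this algorithm; the toy f, the estimates, the bounds, and the step size h are assumed values:

```python
import numpy as np

def f(x):
    return x[0] * x[1] + np.sin(x[0])

def linearized_range(f, x_tilde, delta, h=1e-6):
    """Estimate the range [y~ - Delta, y~ + Delta] using n + 1 calls to f."""
    y_tilde = f(x_tilde)               # first call: f at the estimates
    Delta = 0.0
    for i in range(len(x_tilde)):
        x = x_tilde.copy()
        x[i] += h                      # perturb the i-th input by h_i
        c_i = (f(x) - y_tilde) / h     # difference-quotient estimate of c_i
        Delta += abs(c_i) * delta[i]   # accumulate |c_i| * Delta_i
    return y_tilde - Delta, y_tilde + Delta

print(linearized_range(f, np.array([1.0, 2.0]), np.array([0.1, 0.05])))
```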
8. Cauchy Deviate Method: Idea

• For large $n$, we can further reduce the number of calls to $f$ if we use Cauchy distributions, with probability density
  $\rho(z) = \dfrac{\Delta}{\pi \cdot (z^2 + \Delta^2)}$.
• Known property of the Cauchy distribution:
  – if $z_1, \ldots, z_n$ are independent Cauchy random variables with parameters $\Delta_1, \ldots, \Delta_n$,
  – then $z = c_1 \cdot z_1 + \ldots + c_n \cdot z_n$ is also Cauchy distributed, with parameter
    $\Delta = |c_1| \cdot \Delta_1 + \ldots + |c_n| \cdot \Delta_n$.
• This is exactly what we need to estimate interval uncertainty!
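A quick numerical check of this property (illustrative coefficients and parameters): for a Cauchy variable with parameter Delta, the median of |z| equals Delta, so the sample median of |c_1 z_1 + ... + c_n z_n| should match the predicted sum:

```python
import numpy as np

rng = np.random.default_rng(0)
c       = np.array([2.0, -1.0, 0.5])   # arbitrary coefficients c_i
Delta_i = np.array([0.3, 0.2, 0.1])    # Cauchy parameters Delta_i
N = 200_000

# Cauchy(Delta_i) samples via the tangent transform of U([0, 1])
z = Delta_i * np.tan(np.pi * (rng.random((N, 3)) - 0.5))
combo = z @ c                          # z = c_1 z_1 + ... + c_n z_n

print("empirical:", np.median(np.abs(combo)))
print("predicted:", np.abs(c) @ Delta_i)  # |c_1| Delta_1 + ... + |c_n| Delta_n
```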
9. Cauchy Deviate Method: Towards Implementation

• To implement the Cauchy idea, we must answer the following questions:
  – how to simulate the Cauchy distribution; and
  – how to estimate the parameter $\Delta$ of this distribution from a finite sample.
• Simulation can be based on a functional transformation of uniformly distributed sample values:
  $\delta_i = \Delta_i \cdot \tan(\pi \cdot (r_i - 0.5))$, where $r_i \sim U([0, 1])$.
• To estimate $\Delta$, we can apply the Maximum Likelihood method $\rho(\delta^{(1)}) \cdot \rho(\delta^{(2)}) \cdot \ldots \cdot \rho(\delta^{(N)}) \to \max$, i.e., solve (see the sketch below)
  $\dfrac{1}{1 + \left(\dfrac{\delta^{(1)}}{\Delta}\right)^2} + \ldots + \dfrac{1}{1 + \left(\dfrac{\delta^{(N)}}{\Delta}\right)^2} = \dfrac{N}{2}$.
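A sketch of both steps, with an assumed true parameter for testing. The left-hand side of the ML equation increases monotonically from 0 to N as Delta grows, so bisection applies:

```python
import numpy as np

rng = np.random.default_rng(1)
true_Delta = 0.7                 # assumed value, used only to test recovery
N = 10_000
# simulate Cauchy(true_Delta) via the tangent transform
sample = true_Delta * np.tan(np.pi * (rng.random(N) - 0.5))

def ml_lhs(Delta, sample):
    # left-hand side of the ML equation; increases from 0 to N with Delta
    return np.sum(1.0 / (1.0 + (sample / Delta) ** 2))

lo, hi = 1e-10, np.max(np.abs(sample))   # the root lies in this bracket
for _ in range(100):                     # bisection
    mid = 0.5 * (lo + hi)
    if ml_lhs(mid, sample) > len(sample) / 2:
        hi = mid                         # Delta too large
    else:
        lo = mid                         # Delta too small
print("estimate:", 0.5 * (lo + hi), "true:", true_Delta)
```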
10. Cauchy Deviate Method: Algorithm

• Apply $f$ to the $\tilde x_i$; we get $\tilde y := f(\tilde x_1, \ldots, \tilde x_n)$.
• For $k = 1, 2, \ldots, N$, repeat the following:
  – use the standard random number generator to draw $r_i^{(k)} \sim U([0, 1])$, $i = 1, 2, \ldots, n$;
  – compute Cauchy distributed values $c_i^{(k)} := \tan(\pi \cdot (r_i^{(k)} - 0.5))$;
  – compute $K := \max_i |c_i^{(k)}|$ and normalized errors $\delta_i^{(k)} := \Delta_i \cdot c_i^{(k)} / K$;
  – compute the simulated "actual values" $x_i^{(k)} := \tilde x_i - \delta_i^{(k)}$;
  – compute the simulated errors of indirect measurement:
    $\delta^{(k)} := K \cdot \left( \tilde y - f\left( x_1^{(k)}, \ldots, x_n^{(k)} \right) \right)$.
• Compute $\Delta$ by applying the bisection method to solve the Maximum Likelihood equation (an end-to-end sketch follows below).
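An end-to-end sketch of the method under the same toy assumptions as before (hypothetical f, estimates, and bounds); note that it uses N calls to f regardless of n, instead of the n + 1 calls of the linearized algorithm:

```python
import numpy as np

def f(x):
    return x[0] * x[1] + np.sin(x[0])

def cauchy_deviate_range(f, x_tilde, Delta_in, N=200, seed=0):
    rng = np.random.default_rng(seed)
    y_tilde = f(x_tilde)
    d = np.empty(N)
    for k in range(N):
        c = np.tan(np.pi * (rng.random(len(x_tilde)) - 0.5))  # Cauchy values
        K = np.max(np.abs(c))            # rescale so the deviates stay small
        dx = Delta_in * c / K            # normalized errors delta_i^(k)
        d[k] = K * (y_tilde - f(x_tilde - dx))  # delta^(k)
    # bisection on the ML equation: sum_k 1/(1 + (d_k/Delta)^2) = N/2
    lo, hi = 1e-10, np.max(np.abs(d))
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.sum(1.0 / (1.0 + (d / mid) ** 2)) > N / 2:
            hi = mid
        else:
            lo = mid
    Delta = 0.5 * (lo + hi)
    return y_tilde - Delta, y_tilde + Delta

print(cauchy_deviate_range(f, np.array([1.0, 2.0]), np.array([0.1, 0.05])))
```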