Estimating Correlation under Interval and Fuzzy Uncertainty: Case of Hierarchical Estimation

Ali Jalal-Kamali
Department of Computer Science
University of Texas at El Paso
El Paso, TX 79968, USA
ajalalkamali@miners.utep.edu
1. Need for Correlation

• In practice, it is often desirable to know which quantities x, y are independent and which are correlated.

• To estimate the correlation ρ between x and y, we measure the values x_i and y_i in different situations i.

• ρ is then estimated as the ratio $\rho = \dfrac{C}{\sqrt{V_x \cdot V_y}}$, where the covariance C and the variances V_x, V_y are:

$$C \stackrel{\rm def}{=} \frac{1}{n}\cdot\sum_{i=1}^{n}(x_i - E_x)\cdot(y_i - E_y) = \frac{1}{n}\cdot\sum_{i=1}^{n} x_i \cdot y_i - E_x \cdot E_y,$$

$$V_x \stackrel{\rm def}{=} \frac{1}{n}\cdot\sum_{i=1}^{n}(x_i - E_x)^2, \qquad V_y \stackrel{\rm def}{=} \frac{1}{n}\cdot\sum_{i=1}^{n}(y_i - E_y)^2, \quad \text{and}$$

$$E_x \stackrel{\rm def}{=} \frac{1}{n}\cdot\sum_{i=1}^{n} x_i, \qquad E_y \stackrel{\rm def}{=} \frac{1}{n}\cdot\sum_{i=1}^{n} y_i.$$
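The definitions above can be checked with a short sketch (not part of the slides; a direct translation of the formulas for E_x, E_y, V_x, V_y, C, and ρ):

```python
# Sketch: the correlation estimate rho = C / sqrt(V_x * V_y),
# computed directly from the slide's definitions (population 1/n normalization).
from math import sqrt

def correlation(xs, ys):
    n = len(xs)
    ex, ey = sum(xs) / n, sum(ys) / n                      # E_x, E_y
    vx = sum((x - ex) ** 2 for x in xs) / n                # V_x
    vy = sum((y - ey) ** 2 for y in ys) / n                # V_y
    c = sum((x - ex) * (y - ey) for x, y in zip(xs, ys)) / n  # covariance C
    return c / sqrt(vx * vy)
```

For perfectly (anti-)correlated samples this returns ρ = 1 (resp. ρ = −1), as expected.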
2. Need to Take into Account Interval Uncertainty

• The values x_i and y_i used to estimate correlation come from measurements.

• Measurements are never absolutely accurate.

• The measurement results $\widetilde{x}_i$ and $\widetilde{y}_i$ are, in general, different from the actual (unknown) values x_i and y_i.

• Hence, the value $\widetilde{\rho}$ based on $\widetilde{x}_i$ and $\widetilde{y}_i$ is, in general, different from the ideal value ρ based on x_i and y_i.

• It is therefore desirable to determine how accurate the resulting estimate is.

• Sometimes, we know the probabilities of different values of $\Delta x_i \stackrel{\rm def}{=} \widetilde{x}_i - x_i$ and $\Delta y_i \stackrel{\rm def}{=} \widetilde{y}_i - y_i$.

• However, in many cases, we do not know these probabilities.
3. Interval Uncertainty (cont-d)

• In many cases, we do not know the probabilities of different values ∆x_i and ∆y_i.

• We only know the upper bounds $\Delta_{xi}$ and $\Delta_{yi}$ on the corresponding measurement errors: $|\Delta x_i| \le \Delta_{xi}$ and $|\Delta y_i| \le \Delta_{yi}$.

• In this case, the only info that we have about x_i and y_i is that they belong to the intervals

$$[\underline{x}_i, \overline{x}_i] = [\widetilde{x}_i - \Delta_{xi}, \widetilde{x}_i + \Delta_{xi}] \quad \text{and} \quad [\underline{y}_i, \overline{y}_i] = [\widetilde{y}_i - \Delta_{yi}, \widetilde{y}_i + \Delta_{yi}].$$

• Different values $x_i \in [\underline{x}_i, \overline{x}_i]$ and $y_i \in [\underline{y}_i, \overline{y}_i]$ lead, in general, to different values of the correlation.

• It is therefore desirable to find the range $[\underline{\rho}, \overline{\rho}]$ of all possible values of the correlation ρ:

$$\{\rho(x_1, \ldots, x_n, y_1, \ldots, y_n) : x_i \in [\underline{x}_i, \overline{x}_i], \; y_i \in [\underline{y}_i, \overline{y}_i]\}.$$
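As the later slides note, computing the exact range is NP-hard in general. For intuition only, here is a naive sketch (not from the slides) that samples a grid of values from each interval and reports an *inner* approximation of the range; it is feasible only for very small n and is not the algorithm the talk presents:

```python
# Naive inner approximation of [rho_low, rho_high] by grid sampling.
# Exponential in n -- illustration only, not an exact or scalable method.
import itertools
from math import sqrt

def correlation(xs, ys):
    n = len(xs)
    ex, ey = sum(xs) / n, sum(ys) / n
    vx = sum((x - ex) ** 2 for x in xs) / n
    vy = sum((y - ey) ** 2 for y in ys) / n
    c = sum((x - ex) * (y - ey) for x, y in zip(xs, ys)) / n
    return c / sqrt(vx * vy)

def approx_rho_range(x_boxes, y_boxes, steps=3):
    """x_boxes, y_boxes: lists of (lo, hi) intervals for x_i and y_i."""
    def grid(lo, hi):
        return [lo + k * (hi - lo) / (steps - 1) for k in range(steps)]
    grids = [grid(lo, hi) for lo, hi in x_boxes + y_boxes]
    n = len(x_boxes)
    rhos = [correlation(pt[:n], pt[n:]) for pt in itertools.product(*grids)]
    return min(rhos), max(rhos)
```

For example, with narrow intervals around perfectly correlated points, the sampled maximum is 1 (attained at the nominal values) while the sampled minimum drops below 1.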
4. Expert Uncertainty Reduced to Interval Uncertainty

• An expert usually describes his/her uncertainty by using words from the natural language.

• To formalize this knowledge, fuzzy set theory is used, in which:
  – for every quantity x_i, we have a fuzzy set µ_i(x_i),
  – which describes the expert's knowledge about x_i.

• An alternative user-friendly way to represent a fuzzy set is by using its α-cuts $x_i(\alpha) \stackrel{\rm def}{=} \{x_i : \mu_i(x_i) \ge \alpha\}$.

• It is known that for any function y = f(x_1, …, x_n), the α-cut of y is equal to

$$y(\alpha) = \{f(x_1, \ldots, x_n) : x_1 \in x_1(\alpha), \ldots, x_n \in x_n(\alpha)\}.$$

• So, estimating ρ under fuzzy uncertainty can be reduced to interval uncertainty.
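To make the α-cut notion concrete, here is a small sketch (not from the slides; it assumes the common triangular membership function as an example shape) computing the α-cut interval of a triangular fuzzy number:

```python
# Example (hypothetical membership shape): alpha-cut of a triangular fuzzy
# number with support [a, b] and peak m. Membership rises linearly from
# mu(a) = 0 to mu(m) = 1 and falls back to mu(b) = 0, so the alpha-cut
# {x : mu(x) >= alpha} is the interval below.
def alpha_cut(a, m, b, alpha):
    return (a + alpha * (m - a), b - alpha * (b - m))
```

Each α-cut is an ordinary interval, so correlation under fuzzy uncertainty reduces, level by level, to the interval problem of the previous slide.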
5. What Is Known

• Estimating correlation under interval uncertainty is, in general, NP-hard.

• Unless P=NP, there is no feasible algorithm for computing the range of correlation.

• It is known that:
  – while we cannot have an efficient algorithm for computing both bounds $\underline{\rho}$ and $\overline{\rho}$,
  – we can efficiently compute (at least) one of the bounds.

• Specifically, we can efficiently compute $\overline{\rho}$ when $\underline{\rho} > 0$, and we can efficiently compute $\underline{\rho}$ when $\overline{\rho} < 0$.

• Efficient computations are also possible for weighted correlation, with $E_x = \sum_{i=1}^{n} w_i \cdot x_i$, etc., for some weights $w_i \ge 0$ such that $\sum_{i=1}^{n} w_i = 1$.
6. Estimation Is Usually Hierarchical

• In some practical situations, e.g., when processing census results, we do not process all of the data at once:
  – we first combine the data by county,
  – then combine county data into state-wide data, etc.

• In general, in each stage, the data points are divided into groups I_1, …, I_m; e.g., the overall average E_x is:

$$E_x = \frac{1}{n}\cdot\sum_{i=1}^{n} x_i = \frac{1}{n}\cdot\sum_{j=1}^{m}\sum_{i \in I_j} x_i = \sum_{j=1}^{m} p_j \cdot E_{xj},$$

where $E_{xj} \stackrel{\rm def}{=} \frac{1}{n_j}\cdot\sum_{i \in I_j} x_i$ and $p_j \stackrel{\rm def}{=} \frac{n_j}{n}$.

• We compute E_xj for each group and then compute E_x.

• Similarly, $E_y = \sum_{j=1}^{m} p_j \cdot E_{yj}$.
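The identity $E_x = \sum_j p_j \cdot E_{xj}$ can be checked numerically with a short sketch (not from the slides):

```python
# Sketch: the overall mean equals the p_j-weighted combination of group
# means, E_x = sum_j p_j * E_xj with p_j = n_j / n.
def combined_mean(groups):
    n = sum(len(g) for g in groups)            # total number of data points
    return sum((len(g) / n) * (sum(g) / len(g))  # p_j * E_xj
               for g in groups)
```

For any grouping of the data, this agrees with the mean of the flattened data set.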
7. Estimation Is Usually Hierarchical (cont-d)

• Reminder: $E_x = \sum_{j=1}^{m} p_j \cdot E_{xj}$ and $E_y = \sum_{j=1}^{m} p_j \cdot E_{yj}$.

• Similarly, $V_x = \sum_{j=1}^{m} p_j \cdot (E_{xj} - E_x)^2 + \sum_{j=1}^{m} p_j \cdot V_{xj}$, where V_xj are the x-variances within the j-th group.

• Also, $V_y = \sum_{j=1}^{m} p_j \cdot (E_{yj} - E_y)^2 + \sum_{j=1}^{m} p_j \cdot V_{yj}$, where V_yj are the y-variances within the j-th group.

• Covariance: $C = \sum_{j=1}^{m} p_j \cdot (E_{xj} - E_x) \cdot (E_{yj} - E_y) + \sum_{j=1}^{m} p_j \cdot C_j$, where C_j is the covariance over the j-th group.

• Finally, we compute the correlation ρ as

$$\rho = \frac{C}{\sqrt{V_x \cdot V_y}}.$$
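The between-group/within-group decomposition of the covariance can likewise be verified numerically (a sketch, not from the slides; it uses the same population 1/n definitions as above):

```python
# Sketch: check C = sum_j p_j*(E_xj - E_x)*(E_yj - E_y) + sum_j p_j*C_j
# against the covariance computed directly on the pooled data.
def mean(v):
    return sum(v) / len(v)

def cov(xs, ys):
    ex, ey = mean(xs), mean(ys)
    return sum((x - ex) * (y - ey) for x, y in zip(xs, ys)) / len(xs)

def hierarchical_cov(groups):
    """groups: list of (xs_j, ys_j) pairs, one per group I_j."""
    n = sum(len(xs) for xs, _ in groups)
    all_x = [x for xs, _ in groups for x in xs]
    all_y = [y for _, ys in groups for y in ys]
    ex, ey = mean(all_x), mean(all_y)                       # E_x, E_y
    between = sum((len(xs) / n) * (mean(xs) - ex) * (mean(ys) - ey)
                  for xs, ys in groups)                     # p_j terms
    within = sum((len(xs) / n) * cov(xs, ys) for xs, ys in groups)  # p_j * C_j
    return between + within
```

The same two-term pattern (between-group spread plus p_j-weighted within-group values) gives V_x and V_y when ys is replaced by xs, and vice versa.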
8. Hierarchical Estimation under Interval Uncertainty

• Ideally, for each group j, we compute the values p_j, E_xj, E_yj, V_xj, V_yj, and C_j.

• Based on these values, we compute E_x, E_y, V_x, V_y, C, and ρ.

• In practice, we often only know the values x_i and y_i with interval uncertainty.

• As a result, for each group j, we only know the interval of possible values for each characteristic.

• That means that we only know the intervals for E_xj, E_yj, V_xj, V_yj, and C_j.

• Different values from these intervals lead to different ρ.

• It is desirable to find the range $[\underline{\rho}, \overline{\rho}]$.

• We show that for hierarchical estimation, it is feasible to compute at least one of the endpoints of $[\underline{\rho}, \overline{\rho}]$.
9. Main Result

• There exists a polynomial-time algorithm that:
  – given the intervals for E_xj, E_yj, V_xj, V_yj, and C_j,
  – computes (at least) one of the endpoints of the interval $[\underline{\rho}, \overline{\rho}]$ of possible values of the correlation ρ.

• Specifically, in the case of a non-degenerate interval $[\underline{\rho}, \overline{\rho}]$:
  – when $\overline{\rho} \le 0$, we compute the lower endpoint $\underline{\rho}$;
  – when $0 \le \underline{\rho}$, we compute the upper endpoint $\overline{\rho}$;
  – in all remaining cases, we compute both endpoints $\underline{\rho}$ and $\overline{\rho}$.