On the mixing advantage

  1. On the mixing advantage. Kais Hamza, Monash University. ANZAPW, July 2013, University of Queensland. Joint work with Aidan Sudbury, Peter Jagers & Daniel Tokarev.

  2. Introduction

  3–8. Introduction (one bullet added per slide):
  ◮ X_i^j and X_i denote the lifetime of an individual/unit, and max(X_i^1, …, X_i^n) and max(X_1, X_2, …, X_n) represent the lifetime of a population/system. All random variables are assumed to be non-negative, and X_i, X_i^1, …, X_i^n are identically distributed.
  ◮ M_1 = E[max(X_1^1, …, X_1^n)], M_2 = E[max(X_2^1, …, X_2^n)], …, M_n = E[max(X_n^1, …, X_n^n)].
  ◮ We wish to compare E[max(X_1, X_2, …, X_n)] to M_i = E[max(X_i^1, …, X_i^n)], i = 1, …, n (a simulation sketch follows below).
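  To make these quantities concrete, here is a minimal Monte Carlo sketch (my own illustration, not from the talk; the three lifetime distributions are arbitrary hypothetical choices):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 3                    # number of types = system size
      N = 200_000              # Monte Carlo sample size

      # Hypothetical lifetime distributions, one sampler per type.
      samplers = [
          lambda size: rng.exponential(1.0, size),   # type 1
          lambda size: rng.uniform(0.0, 2.0, size),  # type 2
          lambda size: rng.gamma(2.0, 0.5, size),    # type 3
      ]

      # M_i = E[max(X_i^1, ..., X_i^n)]: a system of n iid type-i units.
      M = [f((N, n)).max(axis=1).mean() for f in samplers]

      # E[max(X_1, ..., X_n)]: a mixed system with one unit of each type.
      mixed = np.column_stack([f(N) for f in samplers]).max(axis=1).mean()

      print("M_i:", np.round(M, 3), "mixed:", round(mixed, 3))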

  9–12. Introduction. Reliability: the warm duplication method. [Diagram, built up over slides 9–12, showing a system of units of types A and B duplicated in the patterns A A, B B and A B.]

  13. Introduction

  14–16. Introduction (one bullet added per slide):
  ◮ Question: Is it better to mix or to go with a single type?
  ◮ Obviously, if one type dominates all others, then choosing that type only is optimal.
  ◮ Question: What if all types are similar (no dominant type), i.e. E[max(X_1^1, …, X_1^n)] = … = E[max(X_n^1, …, X_n^n)]? (A toy example follows below.)
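  A toy example (my own, not from the slides): take n = 2; let type 1 be deterministic, X ≡ 1, and let type 2 take the value 2 with probability p = 1 − 1/√2 ≈ 0.293 and 0 otherwise. Then M_1 = E[max(1, 1)] = 1 and M_2 = 2(1 − (1 − p)^2) = 1, so neither type dominates; yet mixing one unit of each type gives E[max(X_1, X_2)] = 2p + (1 − p) = 1 + p ≈ 1.293 > 1.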

  17. Introduction

  18–20. Introduction (one bullet added per slide). Assume all random variables are independent.
  ◮ It is easy to show (a direct consequence of the arithmetic–geometric mean inequality) that E[max(X_1, …, X_n)] ≥ E[max(X_i^1, …, X_i^n)]. In fact, the same arithmetic–geometric mean inequality shows that E[max(X_1, …, X_n)] ≥ (1/n) Σ_{i=1}^n E[max(X_i^1, …, X_i^n)]. In other words, mixing is advantageous.
  ◮ With M_i = E[max(X_i^1, …, X_i^n)], i = 1, …, n, we call θ = E[max(X_1, …, X_n)] / max(M_1, …, M_n) the mixing factor. We show that when M_i = M, θ ≤ 2 − 1/n < 2. (A sketch of the AM–GM step follows below.)
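  A sketch of the AM–GM step (my reconstruction of the standard argument, not spelled out on the slide): writing F_i for the distribution function of X_i, independence gives

      P(max(X_1, …, X_n) ≤ z) = Π_{i=1}^n F_i(z) = Π_{i=1}^n (F_i(z)^n)^{1/n} ≤ (1/n) Σ_{i=1}^n F_i(z)^n,

  and since E[Z] = ∫_0^∞ P(Z > z) dz for non-negative Z, integrating the complements gives

      E[max(X_1, …, X_n)] ≥ (1/n) Σ_{i=1}^n E[max(X_i^1, …, X_i^n)] = (1/n) Σ_{i=1}^n M_i.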

  21. Existing literature

  22–23. Existing literature:
  ◮ An extensive literature exists on E[max(X_1, …, X_n)] in the iid case – see David and Nagaraja (2003). However, very little work exists for the non-identically distributed case.
  ◮ Arnold and Groeneveld (1979) obtain upper and lower bounds on E[max(X_1, …, X_n)] even when X_1, …, X_n are not independent and not identically distributed, but in terms of E[X_i] and var(X_i), not M_1, …, M_n. This generalises Hartley and David (1954) and Gumbel (1954), who deal with the iid case.

  24. Existing literature

  25. Existing literature:
  ◮ Sen (1970) shows that max(X_1, …, X_n) stochastically dominates max(Y_1, …, Y_n), where Y_1, …, Y_n are iid equally-weighted probability mixtures of X_1, …, X_n:
      P(max(X_1, …, X_n) ≤ z) ≤ P(max(Y_1, …, Y_n) ≤ z).
  In particular,
      (1/n) Σ_{i=1}^n E[max(X_i^1, …, X_i^n)] ≤ E[max(Y_1, …, Y_n)] ≤ E[max(X_1, …, X_n)].
  However, E[max(Y_1, …, Y_n)] cannot be expressed in terms of M_1, …, M_n. (A simulation sketch of this comparison follows below.)
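  A quick simulation of Sen's comparison (my own sketch, reusing the same hypothetical lifetime distributions as earlier):

      import numpy as np

      rng = np.random.default_rng(1)
      n, N = 3, 200_000

      samplers = [
          lambda size: rng.exponential(1.0, size),
          lambda size: rng.uniform(0.0, 2.0, size),
          lambda size: rng.gamma(2.0, 0.5, size),
      ]

      # max(X_1, ..., X_n): one independent unit of each type.
      X = np.column_stack([f(N) for f in samplers])

      # Y_1, ..., Y_n iid equally-weighted mixtures: pick a type uniformly
      # at random for every unit, then draw a lifetime from that type.
      types = rng.integers(0, n, size=(N, n))
      Y = np.choose(types, [f((N, n)) for f in samplers])

      print(X.max(axis=1).mean(), ">=", Y.max(axis=1).mean())  # Sen's dominance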

  26. Unbounded independent case

  27–28. Unbounded independent case. Theorem (H., Jagers, Sudbury & Tokarev, 2009). If X_1, …, X_n are independent random variables with the property that E[max(X_i^1, …, X_i^n)] = M_i, i = 1, 2, …, n, then
      (1/n) Σ_{i=1}^n M_i ≤ E[max(X_1, …, X_n)] ≤ (1/n) Σ_{i=1}^n M_i + ((n − 1)/n) max(M_1, …, M_n).
  In particular, if M_i = M, i = 1, …, n,
      M ≤ E[max(X_1, …, X_n)] ≤ (2 − 1/n) M.
  The upper bound is obtained by letting some of the random variables be concentrated on 0 and x, and letting x → ∞ (a numerical sketch of this construction follows below).
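  A numerical sketch (my own) of that construction, reading it as: one type held constant at M and the remaining n − 1 types concentrated on {0, x}, with P(X = x) tuned so that every M_i = M:

      def theta(n, M, x):
          # Two-point types: P(X = x) = p, P(X = 0) = 1 - p, tuned so that
          # M_i = x * (1 - (1 - p)**n) = M.
          p = 1 - (1 - M / x) ** (1 / n)
          q = (1 - p) ** (n - 1)        # P(no two-point unit reaches x)
          e_max = x * (1 - q) + M * q   # the constant-M unit is the fallback
          return e_max / M              # mixing factor; max(M_1, ..., M_n) = M

      for x in (10, 100, 10_000, 1_000_000):
          print(x, round(theta(n=4, M=1.0, x=x), 4))  # approaches 2 - 1/4 = 1.75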

  29. Bounded independent case

  30. Bounded independent case. Theorem (H. & Sudbury, 2011). If the random variables X_1, …, X_n are independent, concentrated on [0, b] and such that E[max(X_i^1, …, X_i^n)] = M_i, i = 1, …, n, then, putting M̄ = max(M_1, …, M_n),
      b − Π_{i=1}^n (b − M_i)^{1/n} ≤ E[max(X_1, …, X_n)] ≤ b − (b − M̄) Π_{i=1}^{n−1} (1 − M_i/b)^{1/n}.

  31. Bounded independent case

  32. Bounded independent case. Corollary. In the case M_i = M, i = 1, …, n, we have
      M ≤ E[max(X_1, …, X_n)] ≤ b − b(1 − M/b)^{2 − 1/n},
  where the latter expression approaches (2 − 1/n) M as b → +∞ and M(2 − M/b) as n → +∞. (A quick check follows below.)
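  A quick check (my own computation): truncating the earlier two-point construction at b, i.e. one type constant at M and n − 1 types concentrated on {0, b} with P(X = b) = 1 − (1 − M/b)^{1/n} so that every M_i = M, appears to attain the corollary's upper bound exactly:

      def e_max(n, M, b):
          # One type constant at M; n - 1 types on {0, b} with
          # P(X = b) = 1 - (1 - M/b)**(1/n), so that every M_i = M.
          q = (1 - M / b) ** ((n - 1) / n)  # P(no two-point unit reaches b)
          return b * (1 - q) + M * q

      def upper(n, M, b):
          # Corollary's upper bound for the equal-M case.
          return b - b * (1 - M / b) ** (2 - 1 / n)

      n, M, b = 5, 1.0, 3.0
      print(e_max(n, M, b), upper(n, M, b))  # both print 1.5540... (equal)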

  33. Bounded independent case

  34. Bounded independent case. Changing X into b − X transforms maxima into minima, immediately yielding the following result.
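  The identity behind that remark (my note): for random variables concentrated on [0, b], min(X_1, …, X_n) = b − max(b − X_1, …, b − X_n), so the bounds above on expected maxima translate directly into bounds on expected minima.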
