
CS 473: Algorithms, Fall 2016. Lecture 8: Inequalities & QuickSort w.h.p.
Chandra Chekuri and Ruta Mehta, University of Illinois, Urbana-Champaign.


A Slick Analysis of QuickSort, continued...

So far we know that Pr[R_ij] = 2/(j − i + 1). Hence

$$Q(A) = \sum_{i=1}^{n-1} \sum_{j>i} \frac{2}{j-i+1} \;\le\; \sum_{i=1}^{n-1} \sum_{\Delta=2}^{n-i+1} \frac{2}{\Delta} \;\le\; 2 \sum_{i=1}^{n-1} \bigl(H_{n-i+1} - 1\bigr) \;\le\; \sum_{1 \le i < n} 2H_n \;\le\; 2nH_n = O(n \log n).$$
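As a quick numeric sanity check (our addition, not part of the slides), the exact expectation Σ_{i<j} 2/(j − i + 1) can be computed directly and compared against the 2nH_n bound:

```python
def harmonic(m):
    # H_m = 1 + 1/2 + ... + 1/m
    return sum(1.0 / t for t in range(1, m + 1))

def exact_expected_comparisons(n):
    # E[Q(A)] = sum over pairs i < j of Pr[R_ij] = 2 / (j - i + 1)
    return sum(2.0 / (j - i + 1) for i in range(1, n) for j in range(i + 1, n + 1))

for n in [10, 100, 1000]:
    exact = exact_expected_comparisons(n)
    bound = 2 * n * harmonic(n)
    print(n, round(exact, 1), round(bound, 1), exact <= bound)
```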

Part II: Inequalities

Massive randomness... is not that random.

Consider flipping a fair coin n times independently; heads gives 1, tails gives 0. How many 1s? Binomial distribution: k ones with probability $\binom{n}{k} / 2^n$.

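A minimal simulation sketch (our addition) of this concentration: with n = 1000 flips, nearly all of the probability mass sits within 2√n of the mean n/2:

```python
import random
from collections import Counter

def flip_sum(n):
    # sum of n independent fair coin flips (1 = heads, 0 = tails)
    return sum(random.randint(0, 1) for _ in range(n))

n, trials = 1000, 10000
counts = Counter(flip_sum(n) for _ in range(trials))

# fraction of trials landing within 2*sqrt(n) of the mean n/2
half_width = int(2 * n ** 0.5)
near_mean = sum(c for k, c in counts.items() if abs(k - n // 2) <= half_width)
print(near_mean / trials)  # very close to 1: the mass concentrates around n/2
```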

Massive randomness... is not that random.

This is known as concentration of mass. It is a very special case of the law of large numbers.

Side note: law of large numbers (weakest form)

Informal statement: for n large enough, the middle portion of the binomial distribution looks like (converges to) the normal/Gaussian distribution.

Massive randomness... is not that random.

Intuitive conclusion: randomized algorithms are unpredictable at the tactical level, but very predictable at the strategic level. This is made precise through the use of well-known inequalities in the analysis.

Randomized QuickSort: a possible analysis

Random variable Q = number of comparisons made by randomized QuickSort on an array of n elements.

Suppose Pr[Q ≥ 10n lg n] ≤ c. We also know that Q ≤ n² always. Splitting the expectation over the events Q < 10n lg n and Q ≥ 10n lg n gives

E[Q] ≤ 10n lg n + (n² − 10n lg n) · c.

Question: how do we find c, or in other words, how do we bound Pr[Q ≥ 10n lg n]?

Markov's Inequality

Let X be a non-negative random variable over a probability space (Ω, Pr). For any a > 0,

$$\Pr[X \ge a] \le \frac{E[X]}{a}.$$

Proof:

$$E[X] = \sum_{\omega \in \Omega} X(\omega) \Pr[\omega] \;\ge\; \sum_{\omega \in \Omega,\, X(\omega) \ge a} X(\omega) \Pr[\omega] \;\ge\; a \sum_{\omega \in \Omega,\, X(\omega) \ge a} \Pr[\omega] = a \Pr[X \ge a].$$
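To make the inequality concrete, here is a small simulation sketch (our own illustration, not from the lecture) comparing the empirical tail Pr[X ≥ a] with E[X]/a for a binomial variable:

```python
import random

# X = number of heads in 20 fair coin flips, so E[X] = 10 (non-negative as required)
trials = 100000
samples = [sum(random.randint(0, 1) for _ in range(20)) for _ in range(trials)]
mean = sum(samples) / trials

for a in [12, 15, 18]:
    tail = sum(1 for x in samples if x >= a) / trials
    print(a, round(tail, 4), "<=", round(mean / a, 4))  # empirical tail vs E[X]/a
```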

Markov's Inequality: proof by picture (figure not reproduced in this transcript)

Example: balls in a bin

There are n black and white balls in a bin, and we wish to estimate the fraction of black balls; let us say it is p*.

An approach: draw k balls with replacement. If B of them are black, then output p = B/k.

Question: how large does k need to be before our estimated value p is close to p*?

Example: balls in a bin

A rough estimate through Markov's inequality.

Lemma: for any k ≥ 1, Pr[p ≥ 2p*] ≤ 1/2.

Proof. For each 1 ≤ i ≤ k define the random variable X_i, which is 1 if the i-th ball is black and 0 otherwise; E[X_i] = Pr[X_i = 1] = p*. Let B = Σ_{i=1}^{k} X_i; then E[B] = Σ_{i=1}^{k} E[X_i] = kp*, and p = B/k. Markov's inequality gives

$$\Pr[p \ge 2p^*] = \Pr\left[\frac{B}{k} \ge 2p^*\right] = \Pr[B \ge 2kp^*] = \Pr[B \ge 2E[B]] \le \frac{1}{2}.$$
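A small simulation sketch (our addition, with an assumed true fraction p* = 0.3) suggests the Markov bound of 1/2 is quite loose in practice:

```python
import random

def estimate_fraction(true_p, k):
    # draw k balls with replacement; return the observed black fraction p = B/k
    black = sum(1 for _ in range(k) if random.random() < true_p)
    return black / k

true_p, k, trials = 0.3, 50, 20000
overshoot = sum(1 for _ in range(trials) if estimate_fraction(true_p, k) >= 2 * true_p)
print(overshoot / trials)  # far below the Markov bound of 1/2
```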

Chebyshev's Inequality: variance

Variance: given a random variable X over a probability space (Ω, Pr), the variance of X is a measure of how much X deviates from its mean value. Formally,

$$\mathrm{Var}(X) = E\left[(X - E[X])^2\right] = E\left[X^2\right] - E[X]^2.$$

Intuitive derivation: define Y = (X − E[X])² = X² − 2X·E[X] + E[X]². Then

$$\mathrm{Var}(X) = E[Y] = E\left[X^2\right] - 2E[X]\,E[X] + E[X]^2 = E\left[X^2\right] - E[X]^2.$$

Chebyshev's Inequality: independence

Random variables X and Y are called mutually independent if for all x, y ∈ ℝ,

$$\Pr[X = x \wedge Y = y] = \Pr[X = x] \Pr[Y = y].$$

Lemma: if X and Y are independent random variables, then Var(X + Y) = Var(X) + Var(Y).

Lemma: if X and Y are mutually independent, then E[XY] = E[X]·E[Y].
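A quick empirical check of the first lemma (our own sketch; the two distributions are chosen arbitrarily):

```python
import random

def var(samples):
    # population variance: E[Z^2] - E[Z]^2
    m = sum(samples) / len(samples)
    return sum((z - m) ** 2 for z in samples) / len(samples)

trials = 200000
xs = [random.randint(0, 5) for _ in range(trials)]   # arbitrary discrete variable
ys = [random.random() for _ in range(trials)]        # independent uniform variable
zs = [x + y for x, y in zip(xs, ys)]
print(round(var(zs), 4), "vs", round(var(xs) + var(ys), 4))  # nearly equal
```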

Chebyshev's Inequality

Given a > 0,

$$\Pr\left[\,|X - E[X]| \ge a\,\right] \le \frac{\mathrm{Var}(X)}{a^2}.$$

Proof. Y = (X − E[X])² is a non-negative random variable; apply Markov's inequality to Y with a²:

$$\Pr\left[Y \ge a^2\right] \le \frac{E[Y]}{a^2} \iff \Pr\left[(X - E[X])^2 \ge a^2\right] \le \frac{\mathrm{Var}(X)}{a^2} \iff \Pr\left[\,|X - E[X]| \ge a\,\right] \le \frac{\mathrm{Var}(X)}{a^2}.$$

In particular, both one-sided tails are bounded: Pr[X ≤ E[X] − a] ≤ Var(X)/a² and Pr[X ≥ E[X] + a] ≤ Var(X)/a².

Example: balls in a bin (continued)

Lemma: for 0 < ε < 1 and k ≥ 1, Pr[|p − p*| > ε] ≤ 1/(kε²).

Proof. Recall: X_i is 1 if the i-th ball is black and 0 otherwise, B = Σ_{i=1}^{k} X_i, E[X_i] = p*, E[B] = kp*, and p = B/k.

$$\mathrm{Var}(X_i) = E\left[X_i^2\right] - E[X_i]^2 = E[X_i] - E[X_i]^2 = p^*(1 - p^*)$$

$$\mathrm{Var}(B) = \sum_i \mathrm{Var}(X_i) = kp^*(1 - p^*) \quad \text{(exercise)}$$

By Chebyshev's inequality,

$$\Pr[\,|p - p^*| \ge \epsilon\,] = \Pr[\,|B - kp^*| \ge k\epsilon\,] \le \frac{\mathrm{Var}(B)}{k^2\epsilon^2} = \frac{kp^*(1 - p^*)}{k^2\epsilon^2} < \frac{1}{k\epsilon^2}.$$
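The 1/(kε²) decay can be seen experimentally; the sketch below (our addition, with assumed p* = 0.3 and ε = 0.1) estimates the failure probability for growing k:

```python
import random

def estimate(true_p, k):
    # observed black fraction out of k draws with replacement
    return sum(1 for _ in range(k) if random.random() < true_p) / k

true_p, eps, trials = 0.3, 0.1, 5000
for k in [10, 100, 1000]:
    bad = sum(1 for _ in range(trials) if abs(estimate(true_p, k) - true_p) > eps)
    print(k, round(bad / trials, 4), "<=", round(1 / (k * eps ** 2), 4))
```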

Chernoff Bound

Lemma: let X_1, ..., X_k be k independent random variables such that, for each i ∈ [1, k], X_i equals 1 with probability p_i, and 0 with probability 1 − p_i. Let X = Σ_{i=1}^{k} X_i and μ = E[X] = Σ_i p_i. For any 0 < δ < 1, it holds that

$$\Pr[\,|X - \mu| \ge \delta\mu\,] \le 2e^{-\delta^2\mu/3}$$

$$\Pr[X \ge (1 + \delta)\mu] \le e^{-\delta^2\mu/3} \qquad \text{and} \qquad \Pr[X \le (1 - \delta)\mu] \le e^{-\delta^2\mu/2}$$

Proof: in the notes!
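A simulation sketch (our addition; the parameters are arbitrary) comparing the empirical two-sided tail with the bound 2e^{−δ²μ/3}:

```python
import math
import random

# X = sum of k independent indicators with p_i = 1/2, so mu = k/2
k, p, delta, trials = 200, 0.5, 0.3, 20000
mu = k * p
bad = sum(1 for _ in range(trials)
          if abs(sum(random.random() < p for _ in range(k)) - mu) >= delta * mu)
print(bad / trials, "<=", round(2 * math.exp(-delta ** 2 * mu / 3), 4))
```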

Example: balls in a bin (continued)

Lemma: for any 0 < ε < 1 and k ≥ 1, Pr[|p − p*| > ε] ≤ 2e^{−kε²/3}.

Proof. Recall: X_i is 1 if the i-th ball is black and 0 otherwise, B = Σ_{i=1}^{k} X_i, E[X_i] = p*, E[B] = kp*, and p = B/k.

$$\Pr[\,|p - p^*| \ge \epsilon\,] = \Pr\left[\,\left|\tfrac{B}{k} - p^*\right| \ge \epsilon\,\right] = \Pr[\,|B - kp^*| \ge k\epsilon\,] = \Pr\left[\,|B - kp^*| \ge \tfrac{\epsilon}{p^*} \cdot kp^*\,\right]$$

Applying the Chernoff bound with δ = ε/p*, and then using p* ≤ 1,

$$\le 2e^{-\frac{\epsilon^2}{3p^{*2}} \cdot kp^*} = 2e^{-\frac{k\epsilon^2}{3p^*}} \le 2e^{-\frac{k\epsilon^2}{3}}.$$

Example Summary

The problem was to estimate the fraction p* of black balls in a bin filled with white and black balls. Our estimate was p = B/k, where out of k draws (with replacement) B balls turned out black.

Markov's inequality: for any k ≥ 1, Pr[p ≥ 2p*] ≤ 1/2.

Chebyshev's inequality: for any 0 < ε < 1 and k ≥ 1, Pr[|p − p*| > ε] ≤ 1/(kε²).

Chernoff bound: for any 0 < ε < 1 and k ≥ 1, Pr[|p − p*| > ε] ≤ 2e^{−kε²/3}.
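For a feel of how much stronger the Chernoff bound is, this short sketch (our addition) tabulates the Chebyshev and Chernoff tail bounds for ε = 0.1 and increasing k:

```python
import math

# Tail bounds on Pr[|p - p*| > eps] as the number of draws k grows (eps = 0.1)
eps = 0.1
for k in [100, 1000, 10000]:
    chebyshev = 1 / (k * eps ** 2)
    chernoff = 2 * math.exp(-k * eps ** 2 / 3)
    print(k, round(chebyshev, 4), f"{chernoff:.2e}")  # polynomial vs exponential decay
```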

Part III: Randomized QuickSort (continued)

Randomized QuickSort: recall

Input: array A of n numbers. Output: the numbers in sorted order.

Randomized QuickSort:
1. Pick a pivot element uniformly at random from A.
2. Split the array into 3 subarrays: those smaller than the pivot, those larger than the pivot, and the pivot itself.
3. Recursively sort the subarrays, and concatenate them.

Note: on every input, randomized QuickSort takes O(n log n) time in expectation. On every input, it may take Ω(n²) time with some small probability.

Question: with what probability does it take O(n log n) time?
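A direct, minimal Python implementation of this algorithm (a sketch we add for concreteness; it counts comparisons by charging len(arr) − 1 per partition, which is exact for distinct keys):

```python
import random

def quicksort(arr, counter):
    # randomized QuickSort; counter[0] accumulates the number of comparisons
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    counter[0] += len(arr) - 1  # the pivot is compared against every other element
    smaller = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    larger = [x for x in arr if x > pivot]
    return quicksort(smaller, counter) + equal + quicksort(larger, counter)

n = 1000
counter = [0]
out = quicksort(random.sample(range(10 * n), n), counter)
assert out == sorted(out)
print(counter[0])  # typically close to 2 n ln n, about 13800 for n = 1000
```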

Randomized QuickSort: high probability analysis

Informal statement: let the random variable Q(A) = # comparisons done by the algorithm. We will show that Pr[Q(A) ≤ 32n ln n] ≥ 1 − 1/n³. If n = 100, this already gives Pr[Q(A) ≤ 32n ln n] ≥ 0.99999.

Outline of the proof:

If the depth of the recursion is k, then Q(A) ≤ kn. So it suffices to prove that the depth of the recursion is ≤ 32 ln n with high probability, which will imply the result.
1. Focus on a single element. Prove that it "participates" in > 32 ln n levels with probability at most 1/n⁴.
2. By the union bound, some one of the n elements participates in > 32 ln n levels with probability at most 1/n³.
3. Therefore, all elements participate in ≤ 32 ln n levels with probability at least 1 − 1/n³.

Randomized QuickSort: high probability analysis

If there are k levels of recursion, then there are at most kn comparisons.

Fix an element s ∈ A; we will track it at each level. Let S_i be the partition containing s at the i-th level, so S_1 = A and S_k = {s}.

We call s lucky in the i-th iteration if the split is balanced: |S_{i+1}| ≤ (3/4)|S_i| and |S_i \ S_{i+1}| ≤ (3/4)|S_i|.

If ρ = # lucky rounds among the first k rounds, then |S_k| ≤ (3/4)^ρ · n. Hence for |S_k| = 1, ρ = 4 ln n ≥ log_{4/3} n lucky rounds suffice.
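The shrinking of the tracked element's part can be simulated; the sketch below (our addition) uses the illustrative simplification of treating the tracked element as a uniformly random element of its current part at every level:

```python
import math
import random

def levels_for_tracked_element(n):
    # Track one element's part size down the recursion.
    size, levels = n, 0
    while size > 1:
        pivot = random.randrange(size)   # rank of the random pivot within the part
        mine = random.randrange(size)    # modeled rank of the tracked element
        if mine == pivot:
            break                        # the element became the pivot; it is done
        size = pivot if mine < pivot else size - pivot - 1
        levels += 1
    return levels

n, trials = 1024, 5000
deepest = max(levels_for_tracked_element(n) for _ in range(trials))
print(deepest, "<=", round(32 * math.log(n)))  # stays far below 32 ln n
```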

How many rounds before 4 ln n lucky rounds?

Let X_i = 1 if s is lucky in the i-th iteration.

Observation: X_1, ..., X_k are independent variables, and Pr[X_i = 1] = 1/2. Why?
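A sketch of the intended answer (our addition): the split of S_i is balanced whenever the pivot's rank falls in the middle half of S_i, and a uniformly random pivot does so with probability about 1/2.

```python
import random

# If the pivot's rank lies in the middle half of the part, both sides
# have at most (3/4) * size elements, so the round is lucky.
size, trials = 1000, 100000
lucky = sum(1 for _ in range(trials)
            if size // 4 <= random.randrange(size) < 3 * size // 4)
print(lucky / trials)  # close to 0.5
```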
