Advanced Algorithms COMS31900: Probability recap
Raphael Clifford. Slides by Markus Jalsenius.


  1–5. Random variable

A random variable (r.v.) Y over sample space S is a function S → R, i.e. it maps each outcome x ∈ S to some real number Y(x).

The probability of Y taking value y is

  Pr(Y = y) = Σ_{x ∈ S s.t. Y(x) = y} Pr(x)

i.e. the sum of Pr(x) over all outcomes x such that Y(x) = y.

EXAMPLE: Two coin flips, each outcome x with Pr(x) = 1/4.

  S     Y
  HH    2
  HT    1
  TH    5
  TT    2

What is Pr(Y = 2)?

  Pr(Y = 2) = Σ_{x ∈ {HH, TT}} Pr(x) = 1/4 + 1/4 = 1/2

The expected value (the mean) of a r.v. Y, denoted E(Y), is

  E(Y) = Σ_y y · Pr(Y = y)

Here,

  E(Y) = 2 · (1/2) + 1 · (1/4) + 5 · (1/4) = 5/2
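As a quick sanity check (a sketch, not part of the slides), the coin-flip table above can be computed directly. The dictionary `Y` below encodes the slide's table; exact arithmetic uses `fractions.Fraction`.

```python
from fractions import Fraction

# The r.v. Y from the slide's table: two coin flips, each outcome with Pr(x) = 1/4.
Y = {"HH": 2, "HT": 1, "TH": 5, "TT": 2}
pr = {x: Fraction(1, 4) for x in Y}

# Pr(Y = 2): sum Pr(x) over all outcomes x with Y(x) = 2, i.e. HH and TT.
pr_y2 = sum(pr[x] for x in Y if Y[x] == 2)

# E(Y): sum of Y(x) * Pr(x) over the whole sample space.
ey = sum(Y[x] * pr[x] for x in Y)

assert pr_y2 == Fraction(1, 2)
assert ey == Fraction(5, 2)
```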

  6–17. Linearity of expectation

THEOREM (Linearity of expectation). Let Y_1, Y_2, ..., Y_k be k random variables. Then

  E(Σ_{i=1}^{k} Y_i) = Σ_{i=1}^{k} E(Y_i)

Linearity of expectation always holds, regardless of whether the random variables are independent or not.

EXAMPLE: Roll two dice. Let the r.v. Y be the sum of the values. What is E(Y)?

Approach 1 (without the theorem): The sample space is S = {(1,1), (1,2), (1,3), ..., (6,6)} (36 outcomes), each with Pr(x) = 1/36.

  E(Y) = Σ_{x ∈ S} Y(x) · Pr(x) = (1/36) Σ_{x ∈ S} Y(x) = (1/36)(1 · 2 + 2 · 3 + 3 · 4 + · · · + 1 · 12) = 7

(Each term is the number of outcomes with a given sum times that sum: one outcome sums to 2, two outcomes sum to 3, and so on, down to one outcome summing to 12.)

  18–22. Linearity of expectation (continued)

Approach 2 (with the theorem): Let the r.v. Y_1 be the value of the first die and Y_2 the value of the second. Then E(Y_1) = E(Y_2) = 3.5, so

  E(Y) = E(Y_1 + Y_2) = E(Y_1) + E(Y_2) = 7
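Both approaches can be checked exactly in a few lines (a sketch, not part of the slides): brute-force enumeration of all 36 outcomes, and the linearity shortcut.

```python
from fractions import Fraction
from itertools import product

# Approach 1: enumerate all 36 equally likely outcomes and average the sums.
ey = sum(Fraction(a + b, 36) for a, b in product(range(1, 7), repeat=2))
assert ey == 7

# Approach 2: linearity of expectation; one fair die has mean 21/6 = 3.5.
e_die = Fraction(sum(range(1, 7)), 6)
assert e_die + e_die == 7
```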

  23–36. Indicator random variables

An indicator random variable is a r.v. that can only be 0 or 1 (usually referred to by the letter I).

Fact: E(I) = 0 · Pr(I = 0) + 1 · Pr(I = 1) = Pr(I = 1).

Often an indicator r.v. I is associated with an event such that I = 1 if the event happens (and I = 0 otherwise). Indicator random variables and linearity of expectation work great together!

EXAMPLE: Roll a die n times. What is the expected number of rolls that show a value that is at least the value of the previous roll?

For j ∈ {2, ..., n}, let indicator r.v. I_j = 1 if the value of the j-th roll is at least the value of the previous roll (and I_j = 0 otherwise).

  Pr(I_j = 1) = 21/36 = 7/12   (by counting the outcomes)

By linearity of expectation,

  E(Σ_{j=2}^{n} I_j) = Σ_{j=2}^{n} E(I_j) = Σ_{j=2}^{n} Pr(I_j = 1) = (n − 1) · 7/12
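The "by counting the outcomes" step can be verified by enumerating all 36 ordered pairs of consecutive rolls (a sketch; the helper name `expected_at_least_previous` is ours, not the slides').

```python
from fractions import Fraction
from itertools import product

# Count pairs (previous roll, current roll) with current >= previous.
favourable = sum(1 for prev, cur in product(range(1, 7), repeat=2) if cur >= prev)
p = Fraction(favourable, 36)
assert favourable == 21 and p == Fraction(7, 12)

def expected_at_least_previous(n):
    # By linearity of expectation: (n - 1) indicators, each with E(I_j) = 7/12.
    return (n - 1) * p

assert expected_at_least_previous(13) == 7  # 12 * 7/12
```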

  37–43. Markov’s inequality

EXAMPLE: Suppose that the average (mean) speed on the motorway is 60 mph. It then follows that at most 1/2 of all cars drive at least 120 mph, and at most 2/3 of all cars drive at least 90 mph; otherwise the mean would have to be higher than 60 mph (a contradiction).

THEOREM (Markov’s inequality). If X is a non-negative r.v., then for all a > 0,

  Pr(X ≥ a) ≤ E(X) / a

EXAMPLE: From the example above,

  Pr(speed of a random car ≥ 120 mph) ≤ 60/120 = 1/2,
  Pr(speed of a random car ≥ 90 mph) ≤ 60/90 = 2/3.
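The bound is easy to check on any concrete distribution. The speed distribution below is a made-up illustration (the slides only fix the mean at 60 mph), chosen so the mean comes out to exactly 60.

```python
from fractions import Fraction

def pr_at_least(dist, a):
    """Exact Pr(X >= a) for a finite distribution {value: probability}."""
    return sum(p for v, p in dist.items() if v >= a)

def markov_bound(dist, a):
    """Markov's upper bound E(X)/a on Pr(X >= a), for non-negative X."""
    ex = sum(v * p for v, p in dist.items())
    return ex / a

# Hypothetical speeds (an assumption for illustration), mean = (30+50+70+90)/4 = 60.
dist = {30: Fraction(1, 4), 50: Fraction(1, 4), 70: Fraction(1, 4), 90: Fraction(1, 4)}

assert sum(v * p for v, p in dist.items()) == 60
assert markov_bound(dist, 120) == Fraction(1, 2)
assert markov_bound(dist, 90) == Fraction(2, 3)
assert pr_at_least(dist, 90) <= markov_bound(dist, 90)  # true Pr is 1/4
```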

  44–54. Markov’s inequality (continued)

EXAMPLE: n people go to a party, leaving their hats at the door. Each person leaves with a random hat. How many people leave with their own hat?

For j ∈ {1, ..., n}, let indicator r.v. I_j = 1 if the j-th person gets their own hat, otherwise I_j = 0. Each person is equally likely to leave with any of the n hats, so Pr(I_j = 1) = 1/n. By linearity of expectation (and the fact that E(I) = Pr(I = 1)),

  E(Σ_{j=1}^{n} I_j) = Σ_{j=1}^{n} Pr(I_j = 1) = n · (1/n) = 1

By Markov’s inequality (recall: Pr(X ≥ a) ≤ E(X)/a),

  Pr(5 or more people leave with their own hats) ≤ 1/5,
  Pr(at least 1 person leaves with their own hat) ≤ 1/1 = 1.

(Sometimes Markov’s inequality is not particularly informative.) In fact, here it can be shown that as n → ∞, the probability that at least one person leaves with their own hat tends to 1 − 1/e ≈ 0.632.
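For small n the hat assignments can be enumerated exhaustively as permutations, checking both the exact expectation and the 1 − 1/e limit numerically (a sketch; n = 6 is our choice, small enough that all n! assignments fit in memory).

```python
from fractions import Fraction
from itertools import permutations
from math import e

n = 6  # small enough to enumerate all n! = 720 hat assignments exactly
perms = list(permutations(range(n)))
# Fixed points of a permutation = people who got their own hat back.
fixed = [sum(1 for i in range(n) if p[i] == i) for p in perms]

# Expected number of people with their own hat is exactly 1 (for any n).
expected = Fraction(sum(fixed), len(perms))
assert expected == 1

# Probability at least one person gets their own hat; close to 1 - 1/e already.
pr_at_least_one = Fraction(sum(1 for f in fixed if f >= 1), len(perms))
assert abs(float(pr_at_least_one) - (1 - 1 / e)) < 0.001
```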

  55. Markov’s inequality

COROLLARY. If X is a non-negative r.v. that only takes integer values, then

  Pr(X > 0) = Pr(X ≥ 1) ≤ E(X)

For an indicator r.v. I, the bound is tight (=), as Pr(I > 0) = E(I).

  56–65. Union bound

THEOREM (union bound). Let V_1, ..., V_k be k events. Then

  Pr(⋃_{i=1}^{k} V_i) ≤ Σ_{i=1}^{k} Pr(V_i)

The left-hand side is the probability that at least one of the events happens. The bound is tight (=) when the events are all disjoint (V_i and V_j are disjoint iff V_i ∩ V_j is empty).

PROOF. Define indicator r.v. I_j to be 1 if event V_j happens, otherwise I_j = 0. Let the r.v. X = Σ_{j=1}^{k} I_j be the number of events that happen. Then

  Pr(⋃_{j=1}^{k} V_j) = Pr(X > 0) ≤ E(X) = E(Σ_{j=1}^{k} I_j) = Σ_{j=1}^{k} E(I_j) = Σ_{j=1}^{k} Pr(V_j)

where the inequality is the previous Markov corollary, the exchange of E and Σ is linearity of expectation, and the last equality uses E(I_j) = Pr(I_j = 1).
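A tiny worked instance shows the bound holding with overlap (a sketch on an assumed toy sample space, not from the slides): two fair coin flips, with V_1 = "first flip is heads" and V_2 = "second flip is heads".

```python
from fractions import Fraction
from itertools import product

# Toy sample space: two fair coin flips, each of the 4 outcomes with Pr = 1/4.
space = {o: Fraction(1, 4) for o in product("HT", repeat=2)}

# Two overlapping events.
V1 = {o for o in space if o[0] == "H"}  # first flip is heads
V2 = {o for o in space if o[1] == "H"}  # second flip is heads

pr_union = sum(p for o, p in space.items() if o in (V1 | V2))  # exact: 3/4
bound = sum(space[o] for o in V1) + sum(space[o] for o in V2)  # 1/2 + 1/2 = 1

# Union bound: Pr(V1 ∪ V2) <= Pr(V1) + Pr(V2); strict here since V1 ∩ V2 != ∅.
assert pr_union <= bound
assert pr_union == Fraction(3, 4) and bound == 1
```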
