  1. The Second-Moment Method. Will Perkins, January 28, 2013.

  2. Markov’s Inequality. Recall Markov’s inequality: Pr[|X| ≥ t] ≤ E|X| / t. Proof: E|X| = ∫_{ω : |X| < t} |X| dP + ∫_{ω : |X| ≥ t} |X| dP ≥ 0 + t · Pr[|X| ≥ t].
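
A quick numerical illustration (my addition, not from the slides): comparing Markov’s bound E|X|/t with the true tail probability; the choice of an exponential distribution here is arbitrary.

```python
import numpy as np

# Compare Markov's bound E|X|/t with the empirical tail Pr[|X| >= t].
# Distribution choice (Exponential(1), so E|X| = 1) is arbitrary.
rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=1_000_000)

for t in [1, 2, 4, 8]:
    bound = X.mean() / t        # Markov: Pr[|X| >= t] <= E|X| / t
    actual = (X >= t).mean()    # empirical tail probability
    print(f"t={t}: bound {bound:.4f}, actual {actual:.4f}")
```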

  3. Markov’s Inequality. What was special about the absolute value function? (1) It is non-negative; (2) it is increasing. We can apply the same proof to any other non-negative increasing function φ: Pr[|X| ≥ t] ≤ E φ(|X|) / φ(t).

  4. Chebyshev’s Inequality. Theorem (Chebyshev’s Inequality): Pr[|X − E X| ≥ t] ≤ var(X) / t². Proof: apply Markov’s argument with the square function: var(X) = E[|X − E X|²] ≥ t² · Pr[|X − E X| ≥ t].

  5. Chebyshev’s Inequality. For example, if X has mean 3 and variance 1, then the probability that X is more than 5 or less than 1 is bounded by 1/4. Is the bound a good bound for the Normal distribution? Give an example of a random variable where the Chebyshev bound is tight.
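
A small check (my addition): for a standard normal, Chebyshev’s bound 1/t² is far from the true two-sided tail, while the classic tight example (X = ±t with probability 1/(2t²) each, 0 otherwise, giving mean 0 and variance 1) meets the bound exactly.

```python
import math

# Two-sided tail Pr[|Z| >= t] of a standard normal Z, via erfc.
def normal_two_sided_tail(t: float) -> float:
    return math.erfc(t / math.sqrt(2))

for t in [1, 2, 3]:
    print(f"t={t}: Chebyshev bound {1 / t**2:.4f}, "
          f"normal tail {normal_two_sided_tail(t):.4f}")

# Tight case: X = +t or -t with probability 1/(2 t^2) each, else 0.
# Then E X = 0, var(X) = 1, and Pr[|X| >= t] = 1/t^2 exactly.
```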

  6. The Weak Law of Large Numbers. As an application of Chebyshev’s Inequality, we can prove our first limit theorem. Theorem: Let X₁, X₂, ... be i.i.d. random variables with mean µ and variance σ². Then for every ε > 0, Pr[ |(X₁ + ··· + Xₙ)/n − µ| > ε ] → 0 as n → ∞.

  7. The Weak Law of Large Numbers. Comments: What does the WLLN say about political polling, for instance? Are all of the conditions necessary? Why do we say ‘Weak’?

  8. The Weak Law of Large Numbers. Proof: Let Uₙ = (X₁ + ··· + Xₙ)/n and calculate E Uₙ and var(Uₙ): E Uₙ = µ, and var(Uₙ) = (1/n²) ∑ var(Xᵢ) = σ²/n. Now apply Chebyshev: Pr[ |(X₁ + ··· + Xₙ)/n − µ| > ε ] ≤ σ²/(ε² n) → 0 as n → ∞.
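
A simulation sketch (my addition): the empirical mean of i.i.d. Uniform(0, 1) samples concentrates around µ = 1/2 as n grows, and the Chebyshev bound σ²/(ε²n), while crude, goes to 0 alongside it.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, eps = 0.5, 1 / 12, 0.05   # Uniform(0,1): mean 1/2, variance 1/12
trials = 1000

for n in [10, 100, 1000, 10000]:
    means = rng.random((trials, n)).mean(axis=1)
    actual = (np.abs(means - mu) > eps).mean()  # empirical deviation freq.
    bound = sigma2 / (eps**2 * n)               # Chebyshev: sigma^2/(eps^2 n)
    print(f"n={n}: Pr[|mean - mu| > eps] ~ {actual:.3f}, bound {bound:.3f}")
```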

  9. First-Moment Method. Let X be a counting r.v. If E X → 0 as n → ∞, then Pr[X = 0] → 1. But what if E X → ∞? Can we say that Pr[X = 0] → 0? No, not necessarily. Example: Let X = n² with probability 1/n and 0 with probability 1 − 1/n. Then X = 0 whp, but E X = n² · (1/n) = n → ∞.
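
A quick numeric illustration of the counterexample (my addition):

```python
import numpy as np

rng = np.random.default_rng(2)
for n in [10, 100, 1000]:
    # X = n^2 with probability 1/n, else 0
    X = np.where(rng.random(100_000) < 1 / n, n**2, 0)
    print(f"n={n}: E X ~ {X.mean():.0f} (theory {n}), "
          f"Pr[X = 0] ~ {(X == 0).mean():.3f} (theory {1 - 1/n:.3f})")
```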

  10. Counting Random Variables. If X is a counting random variable, then plugging t = E X into Chebyshev’s inequality gives: Lemma: Pr[X = 0] ≤ var(X) / (E X)². (If X = 0 then |X − E X| = E X, so Pr[X = 0] ≤ Pr[|X − E X| ≥ E X] ≤ var(X)/(E X)².) In particular, if E X → ∞ and var(X) = o((E X)²), then X ≥ 1 whp.

  11. An Example. m balls are thrown randomly into n bins. We saw with the first-moment method that if m = (1 + ε) n log n, then whp there are no empty bins. But what if m = (1 − ε) n log n? Let X be the number of empty bins. Then E X = n · (1 − 1/n)^m. If m = (1 − ε) n log n then E X ∼ n^ε → ∞. But to conclude that X ≥ 1 whp, we need the second-moment method.
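
A simulation sketch of this regime (my addition), checking that empty bins do appear and that E X ≈ n^ε:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
eps, trials = 0.2, 200

for n in [100, 1000, 10000]:
    m = int((1 - eps) * n * math.log(n))
    # Empty bins per trial: n minus the number of distinct bins hit.
    empties = np.array([n - np.unique(rng.integers(0, n, size=m)).size
                        for _ in range(trials)])
    print(f"n={n}, m={m}: mean empty bins {empties.mean():.1f} "
          f"(theory ~ n^eps = {n**eps:.1f}), "
          f"Pr[X = 0] = {(empties == 0).mean():.3f}")
```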

  12. Calculating the Variance. Let X = X₁ + X₂ + ··· + Xₙ. Then var(X) = ∑ᵢ₌₁ⁿ var(Xᵢ) + ∑_{i ≠ j} cov(Xᵢ, Xⱼ). Let Xᵢ be the indicator r.v. that the i-th bin is empty.
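
A check of this decomposition for the empty-bins indicators (my addition): with p = Pr[bin i empty] = (1 − 1/n)^m and q = Pr[bins i and j both empty] = (1 − 2/n)^m, the decomposition gives var(X) = n·p(1 − p) + n(n − 1)(q − p²), which we can compare against a direct simulation.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, trials = 50, 100, 100_000

p = (1 - 1 / n) ** m   # Pr[a given bin is empty]
q = (1 - 2 / n) ** m   # Pr[two given bins are both empty]
exact_var = n * p * (1 - p) + n * (n - 1) * (q - p * p)

counts = np.array([n - np.unique(rng.integers(0, n, size=m)).size
                   for _ in range(trials)])
print(f"exact var {exact_var:.3f}, empirical var {counts.var():.3f}")
```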

  13. Upper Bounding the Variance. Since we just need an upper bound to apply Chebyshev’s inequality, things become simpler. First, since the Xᵢ’s are indicator r.v.’s, var(Xᵢ) ≤ E Xᵢ, and so ∑ var(Xᵢ) ≤ E X. Since E X → ∞ for our choice of m, ∑ var(Xᵢ) = o((E X)²).

  14. Bounding the Covariances. We need to bound ∑_{i ≠ j} cov(Xᵢ, Xⱼ). Each of the terms is the same, and there are n(n − 1) of them. cov(Xᵢ, Xⱼ) = E(Xᵢ Xⱼ) − E Xᵢ · E Xⱼ = Pr[i and j empty] − Pr[i empty] · Pr[j empty] = (1 − 2/n)^m − (1 − 1/n)^{2m}. We could calculate and show that this is small, but that is unnecessary: since 1 − 2/n ≤ (1 − 1/n)², the covariance terms are non-positive.
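
A quick sanity check of the sign (my addition):

```python
# cov = (1 - 2/n)^m - (1 - 1/n)^(2m) <= 0, since 1 - 2/n <= (1 - 1/n)^2
for n, m in [(10, 20), (100, 300), (1000, 5000)]:
    cov = (1 - 2 / n) ** m - (1 - 1 / n) ** (2 * m)
    print(f"n={n}, m={m}: cov = {cov:.3e}")
```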

  15. Conclusion. We showed that for m = (1 − ε) n log n, E X → ∞ and var(X) = o((E X)²). Then, using Chebyshev’s inequality with t = E X, we concluded that Pr[X = 0] = o(1).

  16. Applications of the 2nd Moment Method. What’s the ‘typical’ position of a simple random walk after n steps? What’s the longest run of Heads in n flips of a fair coin? What is the maximum of n independent standard normal RV’s?
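
An empirical look at these three questions (my addition; the scalings in the comments are standard facts, not derived on these slides): |Sₙ| is typically of order √n, the longest run of heads is about log₂ n, and the maximum of n standard normals is about √(2 log n).

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 10_000, 500

# Simple random walk: typical |S_n| is of order sqrt(n).
steps = 2 * rng.integers(0, 2, size=(trials, n)) - 1   # +/-1 steps
print(f"mean |S_n| = {np.abs(steps.sum(axis=1)).mean():.1f}, "
      f"sqrt(n) = {n**0.5:.1f}")

# Longest run of heads in n fair flips is about log2(n).
def longest_run(bits):
    best = cur = 0
    for b in bits:
        cur = cur + 1 if b else 0
        best = max(best, cur)
    return best

runs = [longest_run(rng.integers(0, 2, size=n)) for _ in range(trials)]
print(f"mean longest run = {np.mean(runs):.1f}, log2(n) = {np.log2(n):.1f}")

# Max of n independent standard normals is about sqrt(2 log n).
maxima = rng.standard_normal((trials, n)).max(axis=1)
print(f"mean max = {maxima.mean():.2f}, "
      f"sqrt(2 ln n) = {np.sqrt(2 * np.log(n)):.2f}")
```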
