1. Convergence of Random Variables
   Saravanan Vijayakumaran
   sarva@ee.iitb.ac.in
   Department of Electrical Engineering
   Indian Institute of Technology Bombay
   March 19, 2014

2. Motivation

   Theorem (Weak Law of Large Numbers)
   Let $X_1, X_2, \ldots$ be a sequence of independent identically distributed
   random variables with finite mean $\mu$. Their partial sums
   $S_n = X_1 + X_2 + \cdots + X_n$ satisfy
   $$\frac{S_n}{n} \xrightarrow{P} \mu \quad \text{as } n \to \infty.$$

   Theorem (Central Limit Theorem)
   Let $X_1, X_2, \ldots$ be a sequence of independent identically distributed
   random variables with finite mean $\mu$ and finite non-zero variance
   $\sigma^2$. Their partial sums $S_n = X_1 + X_2 + \cdots + X_n$ satisfy
   $$\sqrt{n}\left(\frac{S_n}{n} - \mu\right) \xrightarrow{D} N(0, \sigma^2) \quad \text{as } n \to \infty.$$
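   Aside (not on the original slides): both theorems are easy to see numerically.
   The minimal sketch below draws i.i.d. Exponential(1) samples, an arbitrary
   choice for which $\mu = 1$ and $\sigma^2 = 1$, and checks that $S_n/n$
   concentrates around $\mu$ while $\sqrt{n}(S_n/n - \mu)$ has approximately the
   moments of $N(0, \sigma^2)$.

   ```python
   import numpy as np

   rng = np.random.default_rng(0)
   mu, sigma2 = 1.0, 1.0  # mean and variance of the Exponential(1) distribution

   # Weak law: S_n / n concentrates around mu as n grows
   for n in [10, 1000, 100000]:
       sample_mean = rng.exponential(scale=1.0, size=n).mean()
       print(f"n = {n:6d}, S_n / n = {sample_mean:.4f}")  # approaches mu = 1

   # CLT: sqrt(n) * (S_n / n - mu) is approximately N(0, sigma^2)
   n, trials = 1000, 10_000
   means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)
   z = np.sqrt(n) * (means - mu)
   print(f"empirical mean = {z.mean():.4f}, variance = {z.var():.4f}")  # near 0 and 1
   ```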

3. Modes of Convergence

   • A sequence of real numbers $\{x_n : n = 1, 2, \ldots\}$ is said to
     converge to a limit $x$ if for all $\varepsilon > 0$ there exists an
     $m_\varepsilon \in \mathbb{N}$ such that $|x_n - x| < \varepsilon$ for all
     $n \geq m_\varepsilon$.
   • We want to define convergence of random variables, but they are functions
     from $\Omega$ to $\mathbb{R}$.
   • The solution:
     • Derive real-number sequences from sequences of random variables.
     • Define convergence of the latter in terms of the former.
   • Four ways of defining convergence for random variables:
     • Convergence almost surely
     • Convergence in rth mean
     • Convergence in probability
     • Convergence in distribution

4. Convergence Almost Surely

   • Let $X, X_1, X_2, \ldots$ be random variables on a probability space
     $(\Omega, \mathcal{F}, P)$.
   • For each $\omega \in \Omega$, $X(\omega)$ and $X_n(\omega)$ are real numbers.
   • $X_n \to X$ almost surely if
     $\{\omega \in \Omega : X_n(\omega) \to X(\omega) \text{ as } n \to \infty\}$
     is an event whose probability is 1.
   • "$X_n \to X$ almost surely" is abbreviated as $X_n \xrightarrow{a.s.} X$.

   Example
   • Let $\Omega = [0, 1]$ and $P$ be the uniform distribution on $\Omega$,
     so that $P(\omega \in [a, b]) = b - a$ for $0 \leq a \leq b \leq 1$.
   • Let $X_n$ be defined as
     $$X_n(\omega) = \begin{cases} n, & \omega \in (0, 1/n) \\ 0, & \omega \in \{0\} \cup [1/n, 1] \end{cases}$$
   • Let $X(\omega) = 0$ for all $\omega \in [0, 1]$.
   • $X_n \xrightarrow{a.s.} X$, since for every fixed $\omega > 0$ we have
     $X_n(\omega) = 0$ once $n \geq 1/\omega$.
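   To see the pointwise mechanism concretely, here is a minimal sketch (an
   addition, not from the slides): for any fixed $\omega$ drawn from $\Omega$,
   the real sequence $X_n(\omega)$ is nonzero only while $n < 1/\omega$ and is
   identically 0 afterwards.

   ```python
   import numpy as np

   def X_n(n, omega):
       # X_n(omega) = n on (0, 1/n) and 0 elsewhere in [0, 1]
       return n if 0.0 < omega < 1.0 / n else 0

   rng = np.random.default_rng(1)
   omega = rng.uniform(0.0, 1.0)  # one sample point from Omega = [0, 1]
   seq = [X_n(n, omega) for n in range(1, 30)]
   print(omega, seq)  # nonzero only for n < 1/omega, then identically 0
   ```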

5. Convergence in rth Mean

   • Let $X, X_1, X_2, \ldots$ be random variables on a probability space
     $(\Omega, \mathcal{F}, P)$.
   • Suppose $E[|X|^r] < \infty$ and $E[|X_n|^r] < \infty$ for all $n$.
   • $X_n \to X$ in rth mean if
     $$E\left[|X_n - X|^r\right] \to 0 \quad \text{as } n \to \infty,$$
     where $r \geq 1$.
   • "$X_n \to X$ in rth mean" is abbreviated as $X_n \xrightarrow{r} X$.
   • For $r = 1$, $X_n \xrightarrow{1} X$ is written as "$X_n \to X$ in mean".
   • For $r = 2$, $X_n \xrightarrow{2} X$ is written as "$X_n \to X$ in mean
     square" or $X_n \xrightarrow{m.s.} X$.

   Example
   • Let $\Omega = [0, 1]$ and $P$ be the uniform distribution on $\Omega$.
   • Let $X_n$ be defined as
     $$X_n(\omega) = \begin{cases} n, & \omega \in (0, 1/n) \\ 0, & \omega \in \{0\} \cup [1/n, 1] \end{cases}$$
   • Let $X(\omega) = 0$ for all $\omega \in [0, 1]$.
   • $E[|X_n - X|] = E[|X_n|] = n \cdot \frac{1}{n} = 1$, so $X_n$ does not
     converge in mean to $X$.
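   A quick Monte Carlo check of this example (again an added sketch, not from
   the slides): the expectation refuses to shrink even though the event
   $\{X_n \neq 0\}$ does.

   ```python
   import numpy as np

   rng = np.random.default_rng(2)
   omega = rng.uniform(0.0, 1.0, size=1_000_000)  # many draws from Omega = [0, 1]
   for n in [10, 100, 1000]:
       xn = np.where((omega > 0.0) & (omega < 1.0 / n), n, 0)
       # E|X_n - X| = E|X_n| = n * (1/n) = 1 for every n, so no convergence in mean
       print(n, xn.mean())
   ```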

6. Convergence in Probability

   • Let $X, X_1, X_2, \ldots$ be random variables on a probability space
     $(\Omega, \mathcal{F}, P)$.
   • $X_n \to X$ in probability if
     $$P(|X_n - X| > \varepsilon) \to 0 \quad \text{as } n \to \infty$$
     for all $\varepsilon > 0$.
   • "$X_n \to X$ in probability" is abbreviated as $X_n \xrightarrow{P} X$.

   Example
   • Let $\Omega = [0, 1]$ and $P$ be the uniform distribution on $\Omega$.
   • Let $X_n$ be defined as
     $$X_n(\omega) = \begin{cases} n, & \omega \in (0, 1/n) \\ 0, & \omega \in \{0\} \cup [1/n, 1] \end{cases}$$
   • Let $X(\omega) = 0$ for all $\omega \in [0, 1]$.
   • For $\varepsilon > 0$,
     $$P[|X_n - X| > \varepsilon] = P[|X_n| > \varepsilon] \leq P[X_n = n] = \frac{1}{n} \to 0.$$
   • $X_n \xrightarrow{P} X$.
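   The same example estimated numerically (a sketch added here): the tail
   probability $P(|X_n| > \varepsilon) = 1/n$ vanishes even though $E|X_n|$
   stays at 1.

   ```python
   import numpy as np

   rng = np.random.default_rng(3)
   omega = rng.uniform(0.0, 1.0, size=1_000_000)
   eps = 0.5
   for n in [10, 100, 1000]:
       xn = np.where((omega > 0.0) & (omega < 1.0 / n), n, 0)
       print(n, (np.abs(xn) > eps).mean())  # approximately 1/n -> 0
   ```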

7. Convergence in Distribution

   • Let $X, X_1, X_2, \ldots$ be random variables on a probability space
     $(\Omega, \mathcal{F}, P)$.
   • $X_n \to X$ in distribution if
     $$P(X_n \leq x) \to P(X \leq x) \quad \text{as } n \to \infty$$
     for all points $x$ where $F_X(x) = P(X \leq x)$ is continuous.
   • "$X_n \to X$ in distribution" is abbreviated as $X_n \xrightarrow{D} X$.
   • Convergence in distribution is also termed weak convergence.

   Example
   Let $X$ be a Bernoulli random variable taking values 0 and 1 with equal
   probability $\frac{1}{2}$. Let $X_1, X_2, X_3, \ldots$ be identical random
   variables given by $X_n = X$ for all $n$. The $X_n$'s are not independent,
   but $X_n \xrightarrow{D} X$.
   Let $Y = 1 - X$. Then $X_n \xrightarrow{D} Y$. But $|X_n - Y| = 1$, and the
   $X_n$'s do not converge to $Y$ in any other mode.
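   A short check of this example (an illustration added to the slides): since
   $X$ and $Y = 1 - X$ have the same distribution, every $X_n = X$ matches
   $Y$'s CDF at every point, yet the two differ pathwise by 1.

   ```python
   import numpy as np

   rng = np.random.default_rng(4)
   X = rng.integers(0, 2, size=100_000)  # Bernoulli(1/2) samples
   Y = 1 - X                             # same distribution, different outcomes

   # X_n = X for all n, so each X_n has exactly the CDF of Y as well as of X
   for x in [-0.5, 0.0, 0.5, 1.0]:
       print(x, (X <= x).mean(), (Y <= x).mean())  # empirical CDFs agree

   print(np.abs(X - Y).mean())  # but |X_n - Y| = 1 on every sample path
   ```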

8. Relations between Modes of Convergence

   Theorem
   $$(X_n \xrightarrow{a.s.} X) \implies (X_n \xrightarrow{P} X) \implies (X_n \xrightarrow{D} X)$$
   $$(X_n \xrightarrow{r} X) \implies (X_n \xrightarrow{P} X) \quad \text{for any } r \geq 1.$$

9. Convergence in Probability Implies Convergence in Distribution

   • Suppose $X_n \xrightarrow{P} X$.
   • Let $F_n(x) = P(X_n \leq x)$ and $F(x) = P(X \leq x)$.
   • If $\varepsilon > 0$,
     $$F_n(x) = P(X_n \leq x) = P(X_n \leq x, X \leq x + \varepsilon) + P(X_n \leq x, X > x + \varepsilon) \leq F(x + \varepsilon) + P(|X_n - X| > \varepsilon)$$
     $$F(x - \varepsilon) = P(X \leq x - \varepsilon) = P(X \leq x - \varepsilon, X_n \leq x) + P(X \leq x - \varepsilon, X_n > x) \leq F_n(x) + P(|X_n - X| > \varepsilon)$$
   • Combining the above inequalities, we have
     $$F(x - \varepsilon) - P(|X_n - X| > \varepsilon) \leq F_n(x) \leq F(x + \varepsilon) + P(|X_n - X| > \varepsilon)$$
   • If $F$ is continuous at $x$, then $F(x - \varepsilon) \to F(x)$ and
     $F(x + \varepsilon) \to F(x)$ as $\varepsilon \downarrow 0$.
   • Since $X_n \xrightarrow{P} X$, $P(|X_n - X| > \varepsilon) \to 0$ as
     $n \to \infty$.
   • Hence $F_n(x) \to F(x)$ at every point $x$ where $F$ is continuous, that
     is, $X_n \xrightarrow{D} X$.
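   A numeric sanity check of the sandwich inequality (added here; the
   construction $X_n = X + Z_n/n$ with standard normal $X$ and $Z_n$ is an
   arbitrary hypothetical choice that converges in probability):

   ```python
   import numpy as np

   rng = np.random.default_rng(5)
   m = 1_000_000
   X = rng.normal(size=m)        # X ~ N(0, 1)
   x, eps = 0.3, 0.1             # evaluation point and epsilon
   F_lo = (X <= x - eps).mean()  # F(x - eps)
   F_hi = (X <= x + eps).mean()  # F(x + eps)
   for n in [2, 10, 100]:
       Xn = X + rng.normal(size=m) / n    # X_n -> X in probability
       Fn = (Xn <= x).mean()              # F_n(x)
       p = (np.abs(Xn - X) > eps).mean()  # P(|X_n - X| > eps)
       print(n, F_lo - p <= Fn <= F_hi + p, round(Fn, 4))  # bound holds; F_n(x) -> F(x)
   ```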

10. Convergence in rth Mean Implies Convergence in Probability

   • If $r > s \geq 1$ and $X_n \xrightarrow{r} X$, then $X_n \xrightarrow{s} X$.
     • Lyapunov's inequality: if $r > s > 0$, then
       $$(E[|Y|^s])^{1/s} \leq (E[|Y|^r])^{1/r}.$$
     • If $X_n \xrightarrow{r} X$, then $E[|X_n - X|^r] \to 0$ and
       $$(E[|X_n - X|^s])^{1/s} \leq (E[|X_n - X|^r])^{1/r}.$$
   • If $X_n \xrightarrow{1} X$, then $X_n \xrightarrow{P} X$.
     • By Markov's inequality, we have
       $$P(|X_n - X| > \varepsilon) \leq \frac{E[|X_n - X|]}{\varepsilon}$$
       for all $\varepsilon > 0$.
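   Both inequalities are easy to check numerically; the following sketch
   (added, with $|Y|$ taken from a standard normal as an arbitrary example)
   verifies Markov's bound and the monotonicity of norms asserted by
   Lyapunov's inequality.

   ```python
   import numpy as np

   rng = np.random.default_rng(6)
   y = np.abs(rng.normal(size=1_000_000))  # |Y| for a standard normal Y

   # Markov: P(|Y| > eps) <= E|Y| / eps
   for eps in [0.5, 1.0, 2.0]:
       print(eps, (y > eps).mean(), y.mean() / eps)

   # Lyapunov: (E|Y|^s)^(1/s) is nondecreasing in s
   for s in [1, 2, 4]:
       print(s, (y ** s).mean() ** (1.0 / s))
   ```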

11. Convergence Almost Surely Implies Convergence in Probability

   • Let $A_n(\varepsilon) = \{|X_n - X| > \varepsilon\}$ and
     $B_m(\varepsilon) = \bigcup_{n \geq m} A_n(\varepsilon)$.
   • $X_n \xrightarrow{a.s.} X$ if and only if $P(B_m(\varepsilon)) \to 0$ as
     $m \to \infty$, for all $\varepsilon > 0$:
     • Let $C = \{\omega \in \Omega : X_n(\omega) \to X(\omega) \text{ as } n \to \infty\}$ and
       $$A(\varepsilon) = \{\omega \in \Omega : \omega \in A_n(\varepsilon) \text{ for infinitely many values of } n\} = \bigcap_m \bigcup_{n=m}^{\infty} A_n(\varepsilon).$$
     • $X_n(\omega) \to X(\omega)$ if and only if $\omega \notin A(\varepsilon)$
       for all $\varepsilon > 0$.
     • $P(C) = 1$ if and only if $P(A(\varepsilon)) = 0$ for all $\varepsilon > 0$.
     • $B_m(\varepsilon)$ is a decreasing sequence of events with limit
       $A(\varepsilon)$, so $P(A(\varepsilon)) = 0$ if and only if
       $P(B_m(\varepsilon)) \to 0$ as $m \to \infty$.
   • Since $A_n(\varepsilon) \subseteq B_n(\varepsilon)$, we have
     $$P(|X_n - X| > \varepsilon) = P(A_n(\varepsilon)) \to 0 \quad \text{whenever } P(B_n(\varepsilon)) \to 0.$$
   • Thus $X_n \xrightarrow{a.s.} X \implies X_n \xrightarrow{P} X$.
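   For the running example, the criterion $P(B_m(\varepsilon)) \to 0$ can be
   evaluated directly (a sketch added here): some $n \geq m$ has
   $|X_n(\omega)| > \varepsilon$ exactly when $\omega \in (0, 1/m)$, so
   $P(B_m(\varepsilon)) = 1/m \to 0$.

   ```python
   import numpy as np

   rng = np.random.default_rng(7)
   omega = rng.uniform(0.0, 1.0, size=1_000_000)
   eps = 0.5
   for m in [10, 100, 1000]:
       # sup over n >= m of |X_n(omega)| exceeds eps iff omega in (0, 1/m)
       p_Bm = ((omega > 0.0) & (omega < 1.0 / m)).mean()
       print(m, p_Bm)  # P(B_m(eps)) ~ 1/m -> 0, the a.s. convergence criterion
   ```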

12. Some Converses

   • If $X_n \xrightarrow{D} c$, where $c$ is a constant, then
     $X_n \xrightarrow{P} c$:
     $$P(|X_n - c| > \varepsilon) = P(X_n < c - \varepsilon) + P(X_n > c + \varepsilon) \to 0 \quad \text{if } X_n \xrightarrow{D} c.$$
   • If $P_n(\varepsilon) = P(|X_n - X| > \varepsilon)$ satisfies
     $\sum_n P_n(\varepsilon) < \infty$ for all $\varepsilon > 0$, then
     $X_n \xrightarrow{a.s.} X$:
     • Let $A_n(\varepsilon) = \{|X_n - X| > \varepsilon\}$ and
       $B_m(\varepsilon) = \bigcup_{n \geq m} A_n(\varepsilon)$. Then
       $$P(B_m(\varepsilon)) \leq \sum_{n=m}^{\infty} P(A_n(\varepsilon)) = \sum_{n=m}^{\infty} P_n(\varepsilon) \to 0 \quad \text{as } m \to \infty.$$
     • $X_n \xrightarrow{a.s.} X$ if and only if $P(B_m(\varepsilon)) \to 0$ as
       $m \to \infty$, for all $\varepsilon > 0$.
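   A simulated instance of the second converse (added; the specific choice of
   independent $X_n$ with $P(X_n = 1) = 1/n^2$ and $X = 0$ is hypothetical):
   the tail probabilities are summable, and on a sampled path only finitely
   many $X_n$ are nonzero.

   ```python
   import numpy as np

   rng = np.random.default_rng(8)
   N = 100_000
   n = np.arange(1, N + 1)
   # Independent X_n with P(X_n = 1) = 1/n^2, else 0; sum_n P_n(eps) is finite
   x = (rng.uniform(size=N) < 1.0 / n**2).astype(int)
   nonzero = np.nonzero(x)[0] + 1
   print(nonzero)  # a few small indices only: X_n -> 0 along this sample path
   ```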

13. Borel-Cantelli Lemmas

   • Let $A_1, A_2, \ldots$ be an infinite sequence of events from
     $(\Omega, \mathcal{F}, P)$.
   • Consider the event that infinitely many of the $A_n$ occur:
     $$A = \{A_n \text{ i.o.}\} = \bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m.$$

   Theorem
   Let $A$ be the event that infinitely many of the $A_n$ occur. Then
   • $P(A) = 0$ if $\sum_n P(A_n) < \infty$,
   • $P(A) = 1$ if $\sum_n P(A_n) = \infty$ and $A_1, A_2, A_3, \ldots$ are
     independent events.

   Proof of first lemma
   We have $A \subseteq \bigcup_{m=n}^{\infty} A_m$ for all $n$, so
   $$P(A) \leq \sum_{m=n}^{\infty} P(A_m) \to 0 \quad \text{as } n \to \infty.$$
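   A contrasting simulation for the second lemma (an added sketch; the choice
   of independent events with $P(A_n) = 1/n$ is hypothetical): the sum
   diverges, and occurrences keep arriving at ever-larger indices, as the
   lemma predicts.

   ```python
   import numpy as np

   rng = np.random.default_rng(9)
   N = 1_000_000
   n = np.arange(1, N + 1)
   # Independent events A_n with P(A_n) = 1/n, so sum_n P(A_n) diverges
   occur = rng.uniform(size=N) < 1.0 / n
   hits = np.nonzero(occur)[0] + 1
   print(hits)  # roughly geometrically spaced: new occurrences never stop
   ```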

14. Proof of Second Borel-Cantelli Lemma

   $$A^c = \bigcup_{n=1}^{\infty} \bigcap_{m=n}^{\infty} A_m^c$$
   $$P\left(\bigcap_{m=n}^{\infty} A_m^c\right) = \lim_{r \to \infty} P\left(\bigcap_{m=n}^{r} A_m^c\right) = \lim_{r \to \infty} \prod_{m=n}^{r} [1 - P(A_m)] = \prod_{m=n}^{\infty} [1 - P(A_m)]$$
   $$\leq \prod_{m=n}^{\infty} \exp[-P(A_m)] = \exp\left[-\sum_{m=n}^{\infty} P(A_m)\right] = 0$$
   Thus
   $$P(A^c) = \lim_{n \to \infty} P\left(\bigcap_{m=n}^{\infty} A_m^c\right) = 0.$$

15. Reference

   • Chapter 7, Probability and Random Processes, Grimmett and Stirzaker,
     Third Edition, 2001.
