Sketch of proof

The pdf of the sum of two independent random variables is the convolution of their pdfs:

$f_{X+Y}(z) = \int_{y=-\infty}^{\infty} f_X(z-y)\, f_Y(y)\, \mathrm{d}y$

Repeated convolutions of any pdf with bounded variance result in a Gaussian!
Repeated convolutions

(Figures: a pdf convolved with itself i = 1, 2, 3, 4, 5 times; the result approaches a Gaussian.)
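The repeated-convolution picture can be checked numerically. The sketch below (assuming numpy; the uniform pdf on [0, 1) is just an illustrative choice) convolves a discretized pdf with itself a few times and compares the result with the Gaussian that has matching mean and variance:

```python
import numpy as np

dx = 0.01
# Discretized uniform pdf on [0, 1); any pdf with bounded variance would do.
pdf = np.full(100, 1.0)

conv = pdf
for _ in range(4):
    # pdf of the sum of one more independent copy: one extra convolution.
    conv = np.convolve(conv, pdf) * dx

# Compare with the Gaussian matching the discrete mean and variance.
x = np.arange(len(conv)) * dx
mean = np.sum(x * conv) * dx
var = np.sum((x - mean) ** 2 * conv) * dx
gauss = np.exp(-((x - mean) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)

print(np.max(np.abs(conv - gauss)))  # already small after five convolutions
```

Even after summing only five copies, the maximum pointwise difference from the Gaussian is tiny.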
Iid exponential, λ = 2

(Figures: distribution of the average of i = 10^2, 10^3, 10^4 iid exponential (λ = 2) samples; it concentrates around the mean 1/λ = 0.5 and matches the Gaussian approximation.)
Iid geometric, p = 0.4

(Figures: distribution of the average of i = 10^2, 10^3, 10^4 iid geometric (p = 0.4) samples; it concentrates around the mean 1/p = 2.5 and matches the Gaussian approximation.)
Iid Cauchy

(Figures: distribution of the average of i = 10^2, 10^3, 10^4 iid Cauchy samples; it does not concentrate, because the Cauchy distribution does not have a finite mean or variance, so the central limit theorem does not apply.)
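The contrast between the exponential and Cauchy cases can be reproduced with a short simulation (a sketch assuming numpy; the sample sizes are arbitrary). The sample mean of exponential variables concentrates at 1/λ, while the mean of n iid standard Cauchy variables is again standard Cauchy, so it never concentrates:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1000, 2000

# Exponential(λ = 2): bounded variance, so the sample mean concentrates
# around 1/λ = 0.5 with standard deviation (1/λ)/sqrt(n).
exp_means = rng.exponential(scale=0.5, size=(trials, n)).mean(axis=1)

# Standard Cauchy: no mean, unbounded variance; the average of n iid
# Cauchy variables is itself standard Cauchy, whatever n is.
cauchy_means = rng.standard_cauchy(size=(trials, n)).mean(axis=1)

print(exp_means.std())                    # about 0.5 / sqrt(1000) ≈ 0.016
print(np.mean(np.abs(cauchy_means) > 5))  # roughly 0.13, independent of n
```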
Gaussian approximation to the binomial

X is binomial with parameters n and p. Computing the probability that X is in a certain interval requires summing its pmf over the interval. The central limit theorem provides a quick approximation:

$X = \sum_{i=1}^{n} B_i, \quad \mathrm{E}(B_i) = p, \quad \mathrm{Var}(B_i) = p(1-p)$

$\tfrac{1}{n} X$ is approximately Gaussian with mean $p$ and variance $p(1-p)/n$, so $X$ is approximately Gaussian with mean $np$ and variance $np(1-p)$.
Gaussian approximation to the binomial

A basketball player makes each shot with probability p = 0.4 (shots are iid). What is the probability that she makes at least 420 shots out of 1000?

Exact answer:

$\mathrm{P}(X \ge 420) = \sum_{x=420}^{1000} p_X(x) = \sum_{x=420}^{1000} \binom{1000}{x}\, 0.4^x\, 0.6^{1000-x} = 10.4 \cdot 10^{-2}$

Approximation (U standard Gaussian):

$\mathrm{P}(X \ge 420) \approx \mathrm{P}\left( \sqrt{np(1-p)}\, U + np \ge 420 \right) = \mathrm{P}(U \ge 1.29) = 1 - \Phi(1.29) = 9.85 \cdot 10^{-2}$
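Both numbers can be checked with only the standard library (a sketch; `Phi` is a helper implementing the standard normal cdf via the error function):

```python
from math import comb, erf, sqrt

n, p, k = 1000, 0.4, 420

# Exact answer: sum the binomial pmf over x = 420, ..., 1000.
exact = sum(comb(n, x) * p**x * (1 - p) ** (n - x) for x in range(k, n + 1))

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Gaussian approximation: X ≈ np + sqrt(np(1 − p)) U, U standard normal.
z = (k - n * p) / sqrt(n * p * (1 - p))
approx = 1 - Phi(z)

print(exact)   # ≈ 0.104
print(approx)  # ≈ 0.0985
```

The approximation is off by less than a percentage point; a continuity correction (using 419.5 instead of 420) would bring it even closer to the exact value.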
Types of convergence
◮ Law of Large Numbers
◮ Central Limit Theorem
◮ Convergence of Markov chains
Convergence in distribution

If a Markov chain converges in distribution, then the state vector converges to a constant vector:

$\vec{p}_{\infty} := \lim_{i \to \infty} \vec{p}_{\vec{X}(i)} = \lim_{i \to \infty} T_{\vec{X}}^{\,i}\, \vec{p}_{\vec{X}(0)}$
Mobile phones
◮ Company releases a new mobile-phone model
◮ At the moment 90% of the phones are in stock, 10% have been sold locally and none have been exported
◮ Each day a phone in stock is sold with probability 0.2 and exported with probability 0.1
◮ Initial state vector and transition matrix (states ordered: in stock, sold, exported):

$\vec{a} := \vec{p}_{\vec{X}(0)} = \begin{pmatrix} 0.9 \\ 0.1 \\ 0 \end{pmatrix}, \quad T_{\vec{X}} = \begin{pmatrix} 0.7 & 0 & 0 \\ 0.2 & 1 & 0 \\ 0.1 & 0 & 1 \end{pmatrix}$
Mobile phones

(State diagram: In stock stays in stock with probability 0.7, moves to Sold with probability 0.2 and to Exported with probability 0.1; Sold and Exported are absorbing, with self-transition probability 1.)
Mobile phones

(Figures: three simulated trajectories of a phone's state (In stock, Sold, Exported) over 20 days.)
Mobile phones

The company wants to know how many phones are eventually sold locally and how many are exported:

$\lim_{i \to \infty} \vec{p}_{\vec{X}(i)} = \lim_{i \to \infty} T_{\vec{X}}^{\,i}\, \vec{p}_{\vec{X}(0)} = \lim_{i \to \infty} T_{\vec{X}}^{\,i}\, \vec{a}$
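Before diagonalizing, the limit can be estimated by brute force: apply the transition matrix many times to the initial state vector (a minimal numerical check, assuming numpy):

```python
import numpy as np

# Transition matrix (columns ordered: In stock, Sold, Exported) and the
# initial state vector from the slides.
T = np.array([[0.7, 0.0, 0.0],
              [0.2, 1.0, 0.0],
              [0.1, 0.0, 1.0]])
a = np.array([0.9, 0.1, 0.0])

p = a
for _ in range(100):
    p = T @ p  # state vector after one more day

print(p)  # ≈ [0, 0.7, 0.3]: eventually 70% sold locally, 30% exported
```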
Mobile phones

The transition matrix $T_{\vec{X}}$ has three eigenvectors

$\vec{q}_1 := \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \quad \vec{q}_2 := \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad \vec{q}_3 := \begin{pmatrix} 0.80 \\ -0.53 \\ -0.27 \end{pmatrix}$

The corresponding eigenvalues are $\lambda_1 := 1$, $\lambda_2 := 1$ and $\lambda_3 := 0.7$.

Eigendecomposition of $T_{\vec{X}}$:

$T_{\vec{X}} = Q \Lambda Q^{-1}, \quad Q := \begin{pmatrix} \vec{q}_1 & \vec{q}_2 & \vec{q}_3 \end{pmatrix}, \quad \Lambda := \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}$
Mobile phones

We express the initial state vector $\vec{a}$ in terms of the eigenvectors:

$Q^{-1}\, \vec{p}_{\vec{X}(0)} = \begin{pmatrix} 0.3 \\ 0.7 \\ 1.122 \end{pmatrix}$, so that $\vec{a} = 0.3\, \vec{q}_1 + 0.7\, \vec{q}_2 + 1.122\, \vec{q}_3$
Mobile phones

$\lim_{i \to \infty} T_{\vec{X}}^{\,i}\, \vec{a} = \lim_{i \to \infty} T_{\vec{X}}^{\,i} \left( 0.3\, \vec{q}_1 + 0.7\, \vec{q}_2 + 1.122\, \vec{q}_3 \right)$

$= \lim_{i \to \infty} \left( 0.3\, T_{\vec{X}}^{\,i}\, \vec{q}_1 + 0.7\, T_{\vec{X}}^{\,i}\, \vec{q}_2 + 1.122\, T_{\vec{X}}^{\,i}\, \vec{q}_3 \right)$

$= \lim_{i \to \infty} \left( 0.3\, \lambda_1^i\, \vec{q}_1 + 0.7\, \lambda_2^i\, \vec{q}_2 + 1.122\, \lambda_3^i\, \vec{q}_3 \right)$

$= \lim_{i \to \infty} \left( 0.3\, \vec{q}_1 + 0.7\, \vec{q}_2 + 1.122 \cdot 0.7^i\, \vec{q}_3 \right)$

$= 0.3\, \vec{q}_1 + 0.7\, \vec{q}_2 = \begin{pmatrix} 0 \\ 0.7 \\ 0.3 \end{pmatrix}$

In the limit, 70% of the phones are sold locally and 30% are exported.
Mobile phones

(Figure: the state vector over 20 days; In stock decays to 0, Sold converges to 0.7 and Exported to 0.3.)
Mobile phones

Only the components along the eigenvectors with eigenvalue 1 survive in the limit:

$\lim_{i \to \infty} T_{\vec{X}}^{\,i}\, \vec{p}_{\vec{X}(0)} = \begin{pmatrix} 0 \\ \left( Q^{-1}\, \vec{p}_{\vec{X}(0)} \right)_2 \\ \left( Q^{-1}\, \vec{p}_{\vec{X}(0)} \right)_1 \end{pmatrix}$

For example,

$\vec{b} := \begin{pmatrix} 0.6 \\ 0 \\ 0.4 \end{pmatrix}, \quad Q^{-1}\vec{b} = \begin{pmatrix} 0.6 \\ 0.4 \\ 0.75 \end{pmatrix} \quad (1)$

$\vec{c} := \begin{pmatrix} 0.4 \\ 0.5 \\ 0.1 \end{pmatrix}, \quad Q^{-1}\vec{c} = \begin{pmatrix} 0.23 \\ 0.77 \\ 0.50 \end{pmatrix} \quad (2)$
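The two examples can be checked directly by applying a high matrix power to each initial state vector (a minimal check, assuming numpy):

```python
import numpy as np

T = np.array([[0.7, 0.0, 0.0],
              [0.2, 1.0, 0.0],
              [0.1, 0.0, 1.0]])

# Two different initial state vectors lead to different limits: only the
# components along the eigenvalue-1 eigenvectors survive.
b = np.array([0.6, 0.0, 0.4])
c = np.array([0.4, 0.5, 0.1])

P = np.linalg.matrix_power(T, 200)  # effectively the limit of T^i
print(P @ b)  # ≈ [0, 0.4, 0.6]
print(P @ c)  # ≈ [0, 0.77, 0.23]
```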
Initial state vector $\vec{b}$

(Figure: the state vector over 20 days; Sold converges to 0.4 and Exported to 0.6.)
Initial state vector $\vec{c}$

(Figure: the state vector over 20 days; Sold converges to 0.77 and Exported to 0.23.)
Stationary distribution

$\vec{p}_{\mathrm{stat}}$ is a stationary distribution of $\vec{X}$ if

$T_{\vec{X}}\, \vec{p}_{\mathrm{stat}} = \vec{p}_{\mathrm{stat}}$

i.e. $\vec{p}_{\mathrm{stat}}$ is an eigenvector of $T_{\vec{X}}$ with eigenvalue equal to one. If $\vec{p}_{\mathrm{stat}}$ is the initial state vector, then

$\lim_{i \to \infty} \vec{p}_{\vec{X}(i)} = \vec{p}_{\mathrm{stat}}$
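A stationary distribution can therefore be found by extracting the eigenvector with eigenvalue 1 and normalizing it to sum to one. A sketch assuming numpy, using a hypothetical two-state chain as the example (the mobile-phone chain would not do here, since it has two independent eigenvalue-1 eigenvectors):

```python
import numpy as np

# Hypothetical column-stochastic transition matrix for a 2-state chain.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# A stationary distribution is an eigenvector with eigenvalue 1,
# normalized so that its entries add up to one.
vals, vecs = np.linalg.eig(T)
k = np.argmin(np.abs(vals - 1))
p_stat = np.real(vecs[:, k])
p_stat = p_stat / p_stat.sum()

print(p_stat)      # ≈ [0.667, 0.333]
print(T @ p_stat)  # unchanged: T p_stat = p_stat
```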
Reversibility

Let $\vec{X}(i)$ be distributed according to a state vector $\vec{p} \in \mathbb{R}^s$ ($s$ = number of states). $\vec{X}$ is reversible with respect to $\vec{p}$ if

$\mathrm{P}\left( \vec{X}(i) = x_j,\ \vec{X}(i+1) = x_k \right) = \mathrm{P}\left( \vec{X}(i) = x_k,\ \vec{X}(i+1) = x_j \right)$ for all $1 \le j, k \le s$

This is equivalent to the detailed-balance condition

$\left( T_{\vec{X}} \right)_{kj} \vec{p}_j = \left( T_{\vec{X}} \right)_{jk} \vec{p}_k$, for all $1 \le j, k \le s$
Reversibility implies stationarity

The detailed-balance condition provides a sufficient condition for stationarity: if $\vec{X}$ is reversible with respect to $\vec{p}$, then $\vec{p}$ is a stationary distribution of $\vec{X}$. Indeed,

$\left( T_{\vec{X}}\, \vec{p} \right)_j = \sum_{k=1}^{s} \left( T_{\vec{X}} \right)_{jk} \vec{p}_k$

$= \sum_{k=1}^{s} \left( T_{\vec{X}} \right)_{kj} \vec{p}_j$

$= \vec{p}_j \sum_{k=1}^{s} \left( T_{\vec{X}} \right)_{kj}$

$= \vec{p}_j$

since each column of $T_{\vec{X}}$ adds up to one.
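The implication can be illustrated numerically with a hypothetical 3-state birth-death chain constructed to satisfy detailed balance with respect to a chosen distribution (a sketch assuming numpy; the matrix entries are assumptions made for the example):

```python
import numpy as np

# Hypothetical chain built to satisfy detailed balance with respect to
# p = (0.5, 0.3, 0.2); columns of T sum to one.
p = np.array([0.5, 0.3, 0.2])
T = np.array([[0.8, 1 / 3, 0.0],
              [0.2, 1 / 3, 0.5],
              [0.0, 1 / 3, 0.5]])

# Detailed balance: T[k, j] p[j] = T[j, k] p[k] for all j, k, i.e. the
# matrix of probability fluxes is symmetric.
flux = T * p  # flux[k, j] = T[k, j] p[j]
print(np.allclose(flux, flux.T))  # True: reversible with respect to p

# Reversibility implies stationarity: T p = p.
print(np.allclose(T @ p, p))      # True
```

Swapping any single entry of `T` (while keeping the columns stochastic) breaks the symmetry of `flux`, and `p` generally stops being stationary.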