An Introduction to Asymptotic Theory
Ping Yu
School of Economics and Finance, The University of Hong Kong
Ping Yu (HKU) Asymptotic Theory 1 / 20
Five Weapons in Asymptotic Theory
Five Weapons
- The weak law of large numbers (WLLN, or LLN)
- The central limit theorem (CLT)
- The continuous mapping theorem (CMT)
- Slutsky's theorem
- The Delta method

Notation:
- In nonlinear (in parameter) models, capital letters such as $X$ denote random variables or random vectors, and the corresponding lower-case letters such as $x$ denote the potential values they may take.
- The generic notation for a parameter in nonlinear environments (e.g., nonlinear models or nonlinear constraints) is $\theta$, while in linear environments it is $\beta$.
The WLLN

Definition. A random vector $Z_n$ converges in probability to $Z$ as $n \to \infty$, denoted as $Z_n \xrightarrow{p} Z$, if for any $\delta > 0$,
$$\lim_{n \to \infty} P(\|Z_n - Z\| > \delta) = 0.$$
- Although the limit $Z$ can be random, it is usually constant.
- The probability limit of $Z_n$ is often denoted as $\mathrm{plim}(Z_n)$. If $Z_n \xrightarrow{p} 0$, we denote $Z_n = o_p(1)$.
- When an estimator converges in probability to the true value as the sample size diverges, we say that the estimator is consistent. Consistency is an important preliminary step in establishing other important asymptotic approximations.

Theorem (WLLN). Suppose $X_1, \ldots, X_n, \ldots$ are i.i.d. random vectors, and $E[\|X\|] < \infty$; then as $n \to \infty$,
$$\bar{X}_n \equiv \frac{1}{n} \sum_{i=1}^n X_i \xrightarrow{p} E[X].$$
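The WLLN can be illustrated by simulation. The sketch below (using numpy; the Exp(1) distribution, with $E[X] = 1$, is an illustrative choice, not from the slides) checks that the sample mean concentrates around $E[X]$ as $n$ grows:

```python
import numpy as np

# Monte Carlo illustration of the WLLN: the sample mean of i.i.d. draws
# concentrates around E[X] as n grows. Exp(1) (E[X] = 1) is an
# illustrative choice.
rng = np.random.default_rng(0)

def max_abs_deviation(n, reps=200):
    """Largest |X_bar_n - E[X]| across `reps` simulated samples of size n."""
    samples = rng.exponential(scale=1.0, size=(reps, n))
    return np.max(np.abs(samples.mean(axis=1) - 1.0))

dev_small = max_abs_deviation(50)
dev_large = max_abs_deviation(50_000)
print(dev_small, dev_large)  # the worst-case deviation shrinks as n grows
```

Even the worst deviation over 200 replications becomes small once $n$ is large, which is exactly what $P(\|\bar{X}_n - E[X]\| > \delta) \to 0$ predicts.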
The CLT

Definition. A random $k$-vector $Z_n$ converges in distribution to $Z$ as $n \to \infty$, denoted as $Z_n \xrightarrow{d} Z$, if
$$\lim_{n \to \infty} F_n(z) = F(z)$$
at all $z$ where $F(\cdot)$ is continuous, where $F_n$ is the cdf of $Z_n$ and $F$ is the cdf of $Z$.
- Usually, $Z$ is normally distributed, so all $z \in \mathbb{R}^k$ are continuity points of $F$.
- If $Z_n$ converges in distribution to $Z$, then $Z_n$ is stochastically bounded and we denote $Z_n = O_p(1)$. Rigorously, $Z_n = O_p(1)$ if $\forall \varepsilon > 0$, $\exists M_\varepsilon < \infty$ such that $P(\|Z_n\| > M_\varepsilon) < \varepsilon$ for any $n$. If $Z_n = o_p(1)$, then $Z_n = O_p(1)$.
- We can show that $o_p(1) + o_p(1) = o_p(1)$, $o_p(1) + O_p(1) = O_p(1)$, $O_p(1) + O_p(1) = O_p(1)$, $o_p(1) o_p(1) = o_p(1)$, $o_p(1) O_p(1) = o_p(1)$, and $O_p(1) O_p(1) = O_p(1)$.

Theorem (CLT). Suppose $X_1, \ldots, X_n, \ldots$ are i.i.d. random $k$-vectors, $E[X] = \mu$, and $\mathrm{Var}(X) = \Sigma$; then
$$\sqrt{n}\left(\bar{X}_n - \mu\right) \xrightarrow{d} N(0, \Sigma).$$
Comparison Between the WLLN and CLT

- The CLT tells us more than the WLLN: $\sqrt{n}(\bar{X}_n - \mu) \xrightarrow{d} N(0, \Sigma)$ implies $\bar{X}_n \xrightarrow{p} \mu$, so the CLT is stronger than the WLLN.
- $\bar{X}_n \xrightarrow{p} \mu$ means $\bar{X}_n - \mu = o_p(1)$, but does not provide any information about $\sqrt{n}(\bar{X}_n - \mu)$. The CLT tells us that $\sqrt{n}(\bar{X}_n - \mu) = O_p(1)$, or $\bar{X}_n - \mu = O_p(n^{-1/2})$.
- But the WLLN does not require the second moment to be finite; that is, a stronger result is not free.
The CMT

Theorem (CMT). Suppose $X_1, \ldots, X_n, \ldots$ are random $k$-vectors, and $g$ is a function from the support of $X$ to $\mathbb{R}^l$ that is continuous a.s. $P_X$; then
$$X_n \xrightarrow{p} X \implies g(X_n) \xrightarrow{p} g(X);$$
$$X_n \xrightarrow{d} X \implies g(X_n) \xrightarrow{d} g(X).$$
- The CMT allows the function $g$ to be discontinuous, as long as the probability of being at a discontinuity point is zero. For example, the function $g(u) = u^{-1}$ is discontinuous at $u = 0$, but if $X_n \xrightarrow{d} X \sim N(0, 1)$, then $P(X = 0) = 0$, so $X_n^{-1} \xrightarrow{d} X^{-1}$.
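A simulation sketch of the CMT in distribution: if $Z_n \xrightarrow{d} N(0,1)$ and $g(u) = u^2$ (a continuous choice, used here for illustration), then $g(Z_n) \xrightarrow{d} \chi^2_1$. The standardized Uniform(0,1) sample mean serves as $Z_n$ (an illustrative choice):

```python
import numpy as np

# Illustration of the CMT: Z_n ->d N(0,1) and g(u) = u^2 continuous,
# so g(Z_n) ->d chi^2_1. Z_n is built from standardized Uniform(0,1)
# sample means (an illustrative choice).
rng = np.random.default_rng(2)
n, reps = 1_000, 20_000
mu, sigma = 0.5, np.sqrt(1 / 12)
samples = rng.uniform(size=(reps, n))
z = np.sqrt(n) * (samples.mean(axis=1) - mu) / sigma  # approx N(0,1) by the CLT

g_z = z ** 2                 # CMT: g(Z_n) ->d chi^2_1
print(g_z.mean())            # chi^2_1 has mean 1
print(np.mean(g_z < 3.841))  # ≈ 0.95: 3.841 is the 95% quantile of chi^2_1
```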
Slutsky's Theorem

- In the CMT, $X_n$ converges to $X$ jointly in various modes of convergence.
- For convergence in probability ($\xrightarrow{p}$), marginal convergence implies joint convergence, so there is no problem if we substitute joint convergence by marginal convergence.
- But for convergence in distribution ($\xrightarrow{d}$), $X_n \xrightarrow{d} X$, $Y_n \xrightarrow{d} Y$ does not imply $\binom{X_n}{Y_n} \xrightarrow{d} \binom{X}{Y}$.
- Nevertheless, there is a special case where this result holds, which is Slutsky's theorem.

Theorem (Slutsky's Theorem). If $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{d} c$ ($\iff Y_n \xrightarrow{p} c$), where $c$ is a constant, then $\binom{X_n}{Y_n} \xrightarrow{d} \binom{X}{c}$.
- This implies $X_n + Y_n \xrightarrow{d} X + c$, $Y_n X_n \xrightarrow{d} cX$, and $Y_n^{-1} X_n \xrightarrow{d} c^{-1} X$ when $c \neq 0$. Here $X_n$, $Y_n$, $X$, $c$ can be understood as vectors or matrices as long as the operations are compatible.
Applications of the CMT and Slutsky's Theorem

Example. Suppose $X_n \xrightarrow{d} N(0, \Sigma)$, and $Y_n \xrightarrow{p} \Sigma$; then $Y_n^{-1/2} X_n \xrightarrow{d} \Sigma^{-1/2} N(0, \Sigma) = N(0, I)$, where $I$ is the identity matrix. (why?)

Example. Suppose $X_n \xrightarrow{d} N(0, \Sigma)$, and $Y_n \xrightarrow{p} \Sigma$; then $X_n' Y_n^{-1} X_n \xrightarrow{d} \chi^2_k$, where $k$ is the dimension of $X_n$. (why?)

Another important application of Slutsky's theorem is the Delta method.
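The second example can be checked by simulation. Below, $X_n = \sqrt{n}\bar{X}$ plays the role of the $N(0,\Sigma)$ limit and the sample covariance plays the role of $Y_n$; the particular $\Sigma$ and sample sizes are illustrative choices:

```python
import numpy as np

# Illustration: with X_n ->d N(0, Sigma) and Y_n ->p Sigma, the
# quadratic form X_n' Y_n^{-1} X_n ->d chi^2_k (here k = 2).
# Sigma and the data-generating process are illustrative choices.
rng = np.random.default_rng(3)
k, n, reps = 2, 500, 5_000
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
L = np.linalg.cholesky(Sigma)

stats = np.empty(reps)
for r in range(reps):
    X = rng.standard_normal((n, k)) @ L.T    # i.i.d. N(0, Sigma) rows
    Xn = np.sqrt(n) * X.mean(axis=0)         # ->d N(0, Sigma) by the CLT
    Yn = np.cov(X, rowvar=False)             # ->p Sigma by the WLLN
    stats[r] = Xn @ np.linalg.solve(Yn, Xn)  # Slutsky + CMT

print(stats.mean())            # chi^2_2 has mean k = 2
print(np.mean(stats < 5.991))  # ≈ 0.95: 5.991 is the 95% quantile of chi^2_2
```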
The Delta Method

Theorem. Suppose $\sqrt{n}(Z_n - c) \xrightarrow{d} Z \sim N(0, \Sigma)$, $c \in \mathbb{R}^k$, and $g(z): \mathbb{R}^k \to \mathbb{R}$. If $\frac{dg(z)}{dz'}$ is continuous at $c$, then
$$\sqrt{n}\left(g(Z_n) - g(c)\right) \xrightarrow{d} \frac{dg(c)}{dz'} Z.$$

Proof. By the mean value theorem,
$$\sqrt{n}\left(g(Z_n) - g(c)\right) = \frac{dg(\bar{c})}{dz'} \sqrt{n}\left(Z_n - c\right),$$
where $\bar{c}$ is between $Z_n$ and $c$. $\sqrt{n}(Z_n - c) \xrightarrow{d} Z$ implies that $Z_n \xrightarrow{p} c$, so $\bar{c} \xrightarrow{p} c$ and, by the CMT, $\frac{dg(\bar{c})}{dz'} \xrightarrow{p} \frac{dg(c)}{dz'}$. By Slutsky's theorem, $\sqrt{n}(g(Z_n) - g(c))$ has the asymptotic distribution $\frac{dg(c)}{dz'} Z$.

- The Delta method implies that asymptotically, the randomness in a transformation of $Z_n$ is completely controlled by that in $Z_n$.
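A scalar simulation sketch of the Delta method, with illustrative choices not from the slides: $Z_n = \bar{X}_n$ from Exp(1) draws (so $c = \mu = 1$, $\Sigma = \sigma^2 = 1$) and $g(z) = z^2$ (so $g'(c) = 2$), giving the limit $N(0, g'(c)^2 \sigma^2) = N(0, 4)$:

```python
import numpy as np

# Monte Carlo check of the Delta method: Z_n = X_bar_n from Exp(1)
# (mu = 1, sigma^2 = 1), g(z) = z^2, so sqrt(n)*(g(Z_n) - g(mu))
# should be approximately N(0, g'(mu)^2 * sigma^2) = N(0, 4).
# All distributional choices are illustrative.
rng = np.random.default_rng(4)
n, reps = 5_000, 20_000
samples = rng.exponential(scale=1.0, size=(reps, n))
Zn = samples.mean(axis=1)
T = np.sqrt(n) * (Zn ** 2 - 1.0)  # g(z) = z^2, g(mu) = 1

print(T.var())  # close to g'(1)^2 * Var(X) = 4
```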
Asymptotics for the MoM Estimator
The MoM Estimator

- Recall that the MoM estimator is defined as the solution to
$$\frac{1}{n} \sum_{i=1}^n m(X_i | \theta) = 0.$$
- We can prove the MoM estimator is consistent and asymptotically normal (CAN) under some regularity conditions. Specifically, the asymptotic distribution of the MoM estimator is
$$\sqrt{n}\left(\hat{\theta} - \theta_0\right) \xrightarrow{d} N\left(0, M^{-1} \Omega \left(M^{-1}\right)'\right),$$
where $M = \frac{dE[m(X|\theta_0)]}{d\theta'}$ and $\Omega = E\left[m(X|\theta_0) m(X|\theta_0)'\right]$.
- The asymptotic variance takes a sandwich form and can be estimated by its sample analog.
Derivation of the Asymptotic Distribution of the MoM Estimator

$$\frac{1}{n} \sum_{i=1}^n m(X_i | \hat{\theta}) = 0$$
$$\Rightarrow \frac{1}{n} \sum_{i=1}^n m(X_i | \theta_0) + \left[\frac{1}{n} \sum_{i=1}^n \frac{dm(X_i | \bar{\theta})}{d\theta'}\right] \left(\hat{\theta} - \theta_0\right) = 0$$
$$\Rightarrow \sqrt{n}\left(\hat{\theta} - \theta_0\right) = -\left[\frac{1}{n} \sum_{i=1}^n \frac{dm(X_i | \bar{\theta})}{d\theta'}\right]^{-1} \frac{1}{\sqrt{n}} \sum_{i=1}^n m(X_i | \theta_0) \xrightarrow{d} -M^{-1} N(0, \Omega), \text{ (why?)}$$
where $\bar{\theta}$ is between $\hat{\theta}$ and $\theta_0$.
- $\sqrt{n}\left(\hat{\theta} - \theta_0\right) \approx \frac{1}{\sqrt{n}} \sum_{i=1}^n \left(-M^{-1} m(X_i | \theta_0)\right)$, so $-M^{-1} m(X | \theta_0)$ is called the influence function.
- We use $\frac{dE[m(X|\theta_0)]}{d\theta'}$ instead of $E\left[\frac{dm(X|\theta_0)}{d\theta'}\right]$ because $E[m(X|\theta)]$ is smoother than $m(X|\theta)$, so the former definition can be applied to such situations as quantile estimation, where $m(X|\theta)$ is not differentiable at $\theta_0$. In this course, we will not meet such cases.
Intuition for the Asymptotic Distribution of the MoM Estimator

- Suppose $E[X] = g(\theta_0)$ with $g \in C^{(1)}$ in a neighborhood of $\theta_0$; then $\theta_0 = g^{-1}(E[X]) \equiv h(E[X])$. (what are $m$, $M$ and $\Omega$ here?)
- The MoM estimator of $\theta$ is to set $\bar{X} = g(\theta)$, so $\hat{\theta} = h(\bar{X})$.
- By the WLLN, $\bar{X} \xrightarrow{p} E[X]$; then by the CMT, $\hat{\theta} \xrightarrow{p} h(E[X]) = \theta_0$ since $h(\cdot)$ is continuous.
- Now,
$$\sqrt{n}\left(\hat{\theta} - \theta_0\right) = \sqrt{n}\left(h(\bar{X}) - h(E[X])\right) = h'\left(\bar{X}^*\right) \sqrt{n}\left(\bar{X} - E[X]\right),$$
where the second equality is from the mean value theorem (MVT).
- Because $\bar{X}^*$ is between $\bar{X}$ and $E[X]$ and $\bar{X} \xrightarrow{p} E[X]$, $\bar{X}^* \xrightarrow{p} E[X]$. By the CMT, $h'(\bar{X}^*) \xrightarrow{p} h'(E[X])$. By the CLT, $\sqrt{n}\left(\bar{X} - E[X]\right) \xrightarrow{d} N(0, \mathrm{Var}(X))$.
- Then by Slutsky's theorem,
$$\sqrt{n}\left(\hat{\theta} - \theta_0\right) \xrightarrow{d} h'(E[X]) \, N(0, \mathrm{Var}(X)) = N\left(0, h'(E[X])^2 \mathrm{Var}(X)\right) = N\left(0, \frac{\mathrm{Var}(X)}{g'(\theta_0)^2}\right).$$
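A concrete instance of this argument can be simulated. Taking $g(\theta) = e^\theta$, so $\hat{\theta} = h(\bar{X}) = \log \bar{X}$, with $X \sim$ Exp(1) and $\theta_0 = 0$ (so $E[X] = 1$, $\mathrm{Var}(X) = 1$, $g'(\theta_0) = 1$) is an illustrative choice; the asymptotic variance $\mathrm{Var}(X)/g'(\theta_0)^2$ is then 1:

```python
import numpy as np

# Illustrative instance of the slide's intuition: E[X] = g(theta_0)
# with g(theta) = exp(theta), so the MoM estimator is
# theta_hat = h(X_bar) = log(X_bar). With X ~ Exp(1) and theta_0 = 0,
# sqrt(n)*(theta_hat - theta_0) should be approximately
# N(0, Var(X)/g'(theta_0)^2) = N(0, 1).
rng = np.random.default_rng(5)
n, reps = 5_000, 20_000
samples = rng.exponential(scale=1.0, size=(reps, n))
theta_hat = np.log(samples.mean(axis=1))  # MoM estimator h(X_bar)
T = np.sqrt(n) * theta_hat                # theta_0 = 0

print(T.var())  # close to Var(X) / g'(theta_0)^2 = 1
```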