Multiple Regression Analysis: Inference

Caio Vigo
The University of Kansas, Department of Economics
Fall 2019

These slides were based on Introductory Econometrics by Jeffrey M. Wooldridge (2015).
Topics

1 Motivation
2 Sampling Distributions of the OLS Estimators
3 Testing Hypotheses About a Single Population Parameter
    Testing Against One-Sided Alternatives
    Testing Against Two-Sided Alternatives
    Testing Other Hypotheses about the β_j
    Computing p-Values for t Tests
    Practical (Economic) versus Statistical Significance
4 Confidence Intervals
5 Testing Multiple Exclusion Restrictions
    R-Squared Form of the F Statistic
    The F Statistic for Overall Significance of a Regression
Motivation for Inference

Goal: We want to test hypotheses about the parameters β_j in the population regression model. We want to know whether the true parameter β_j equals some particular value (your hypothesis).

• In order to do that, we will need to add a final assumption, MLR.6, giving us the Classical Linear Model (CLM).
Motivation for Inference

MLR.1: y = β_0 + β_1 x_1 + β_2 x_2 + ... + β_k x_k + u
MLR.2: random sampling from the population
MLR.3: no perfect collinearity in the sample
MLR.4: E(u | x_1, ..., x_k) = E(u) = 0 (exogenous explanatory variables)
MLR.5: Var(u | x_1, ..., x_k) = Var(u) = σ² (homoskedasticity)

MLR.1 - MLR.4: Needed for unbiasedness of OLS: E(β̂_j) = β_j

MLR.1 - MLR.5: Needed to compute Var(β̂_j):

    Var(β̂_j) = σ² / [ SST_j (1 − R²_j) ],   with   σ̂² = SSR / (n − k − 1)

and for efficiency of OLS ⇒ BLUE.
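To make the variance formula concrete, here is a minimal Python sketch (not from the slides) that computes the estimated Var(β̂_1) as σ̂² / [SST_1 (1 − R²_1)] and checks it against the matrix formula σ̂² (X'X)⁻¹. The simulated data-generating process and variable names are illustrative assumptions.

    # Sketch: Var(beta_hat_1) = sigma_hat^2 / (SST_1 * (1 - R^2_1)),
    # with sigma_hat^2 = SSR / (n - k - 1); checked against sigma_hat^2 * (X'X)^{-1}.
    # The DGP below is an illustrative assumption, not data from the slides.
    import numpy as np

    rng = np.random.default_rng(42)
    n, k = 300, 2
    x1 = rng.normal(size=n)
    x2 = 0.6 * x1 + rng.normal(size=n)           # correlated regressors
    y = 1.0 + 0.5 * x1 - 2.0 * x2 + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x1, x2])
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / (n - k - 1)      # SSR / (n - k - 1)

    # SST_1 and R^2_1 come from regressing x1 on the other regressors
    SST_1 = np.sum((x1 - x1.mean())**2)
    X_other = np.column_stack([np.ones(n), x2])
    fit = X_other @ np.linalg.lstsq(X_other, x1, rcond=None)[0]
    R2_1 = 1 - np.sum((x1 - fit)**2) / SST_1
    var_b1_formula = sigma2_hat / (SST_1 * (1 - R2_1))

    var_b1_matrix = sigma2_hat * np.linalg.inv(X.T @ X)[1, 1]
    print(var_b1_formula, var_b1_matrix)          # identical up to rounding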
Sampling Distributions of the OLS Estimators

• Now we need to know the full sampling distribution of the β̂_j.

• The Gauss-Markov assumptions don't tell us anything about these distributions.

• Based on our model, conditional on {(x_i1, ..., x_ik) : i = 1, ..., n}, we need dist(β̂_j) = f(dist(u)), i.e., the distribution of β̂_j is inherited from the distribution of u.

• That's why we need one more assumption.
Sampling Distributions of the OLS Estimators

MLR.6 (Normality)
The population error u is independent of the explanatory variables (x_1, ..., x_k) and is normally distributed with mean zero and variance σ²:

    u ∼ Normal(0, σ²)
Sampling Distributions of the OLS Estimators

MLR.1 - MLR.4 → unbiasedness of OLS

Gauss-Markov assumptions: MLR.1 - MLR.4 + MLR.5 (homoskedastic errors)

Classical Linear Model (CLM): Gauss-Markov + MLR.6 (normally distributed errors)
Sampling Distributions of the OLS Estimators

    u ∼ Normal(0, σ²)

• Strongest assumption.

• MLR.6 implies zero conditional mean (MLR.4) and homoskedasticity (MLR.5).

• Now we have full independence between u and (x_1, x_2, ..., x_k), not just mean and variance independence.

• Reason to call the x_j independent variables.

• Recall the Normal distribution properties (see slides for Appendix B).
Sampling Distributions of the OLS Estimators

[Figure: Distribution of u: u ∼ N(0, σ²)]
Sampling Distributions of the OLS Estimators

[Figure: f(y | x) with homoskedastic normal errors, i.e., u ∼ N(0, σ²)]
Sampling Distributions of the OLS Estimators

• Property of a Normal distribution: if W ∼ Normal, then a + bW ∼ Normal for constants a and b.

• What we are saying is that for normal r.v.s, any linear combination of them is also normally distributed.

• Because the u_i are independent and identically distributed (iid) as Normal(0, σ²),

    β̂_j = β_j + Σ_{i=1}^{n} w_ij u_i ∼ Normal( β_j , Var(β̂_j) )

• Then we can apply the Central Limit Theorem.
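The linear-combination representation above can be checked numerically. The sketch below (an illustration, not part of the slides) uses the standard partialling-out weights w_i1 = r̂_i1 / Σ r̂²_i1, where r̂_i1 are the residuals from regressing x_1 on the other regressors; the simulated DGP is an assumption.

    # Check that beta_hat_1 = beta_1 + sum_i w_i1 * u_i for the OLS slope on x1.
    # DGP and variable names are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    beta = np.array([1.0, 0.5, -2.0])            # beta_0, beta_1, beta_2
    x1 = rng.normal(size=n)
    x2 = 0.4 * x1 + rng.normal(size=n)
    u = rng.normal(scale=1.5, size=n)            # u ~ Normal(0, sigma^2)
    y = beta[0] + beta[1] * x1 + beta[2] * x2 + u

    X = np.column_stack([np.ones(n), x1, x2])
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

    # Partial out the constant and x2 from x1 to get the weights w_i1
    X_other = np.column_stack([np.ones(n), x2])
    r1 = x1 - X_other @ np.linalg.lstsq(X_other, x1, rcond=None)[0]
    w1 = r1 / np.sum(r1**2)

    print(beta_hat[1])               # OLS slope on x1
    print(beta[1] + w1 @ u)          # beta_1 + sum_i w_i1 u_i  (matches)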
Sampling Distributions of the OLS Estimators

Theorem: Normal Sampling Distributions
Under the CLM assumptions, conditional on the sample outcomes of the explanatory variables,

    β̂_j ∼ Normal( β_j , Var(β̂_j) )

and so

    ( β̂_j − β_j ) / sd(β̂_j) ∼ Normal(0, 1)
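A small Monte Carlo sketch of the theorem: across repeated samples (with the regressors held fixed), the standardized slope (β̂_1 − β_1) / sd(β̂_1) should behave like a Normal(0, 1) draw. The DGP, sample size, and number of replications below are illustrative assumptions, not values from the slides.

    # Simulate the exact sampling distribution of the standardized OLS slope.
    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 200, 5000
    beta0, beta1, sigma = 1.0, 0.5, 2.0
    x = rng.uniform(0, 10, size=n)               # regressors fixed across samples
    X = np.column_stack([np.ones(n), x])
    # Conditional on X: sd(beta_hat_1) = sigma * sqrt([(X'X)^{-1}]_{11})
    sd_b1 = sigma * np.sqrt(np.linalg.inv(X.T @ X)[1, 1])

    z = np.empty(reps)
    for r in range(reps):
        u = rng.normal(0, sigma, size=n)         # u ~ Normal(0, sigma^2), MLR.6
        y = beta0 + beta1 * x + u
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        z[r] = (b[1] - beta1) / sd_b1

    print(z.mean(), z.std())                     # close to 0 and 1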