A Framework for Hypothesis Tests in Statistical Models With Linear Predictors

Georges Monette (York University, Toronto, Ontario, Canada)
John Fox (McMaster University, Hamilton, Ontario, Canada)

useR! 2009, Rennes
Basic Results: General Setting

We have an estimator b of the p × 1 parameter vector β. b is asymptotically multivariate-normal, with asymptotic expectation β and estimated asymptotic positive-definite covariance matrix V.

In the applications that we have in mind, β appears in a linear predictor η = x′β, where x′ is a “design” vector of regressors.
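For concreteness, in R the two ingredients b and V are available from most fitted-model objects via coef() and vcov(). A minimal sketch, assuming an illustrative data frame dat with variables y, x, and a 0/1 dummy d (any model supplying a linear predictor, coef(), and vcov() would do):

# A logistic regression is used here purely as an illustration.
mod <- glm(y ~ x * d, family = binomial, data = dat)
b <- coef(mod)   # estimator b of beta (length p = 4 here)
V <- vcov(mod)   # estimated asymptotic covariance matrix V (p x p)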
Basic Results: Linear Hypotheses

We address linear hypotheses of the form

H_1: ψ_1 = L_1 β = 0,

where the k_1 × p hypothesis matrix L_1 of rank k_1 ≤ p contains pre-specified constants and 0 is the k_1 × 1 zero vector.

As is well known, the hypothesis H_1 can be tested by the Wald statistic

Z_1 = (L_1 b)′ (L_1 V L_1′)^{−1} L_1 b,

which is asymptotically distributed as chi-square with k_1 degrees of freedom.
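A sketch of the Wald test in R, continuing the code above; the particular hypothesis matrix, which tests that the third and fourth coefficients are zero, is illustrative:

L1 <- rbind(c(0, 0, 1, 0),
            c(0, 0, 0, 1))   # k1 x p hypothesis matrix, k1 = 2, p = 4
Z1 <- drop(t(L1 %*% b) %*% solve(L1 %*% V %*% t(L1)) %*% (L1 %*% b))
pchisq(Z1, df = nrow(L1), lower.tail = FALSE)   # asymptotic chi-square p-value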
Basic Results: Nested Linear Hypotheses

Consider another hypothesis

H_2: ψ_2 = L_2 β = 0,

where L_2 has k_2 < k_1 rows and is of rank k_2, and 0 is the k_2 × 1 zero vector.

Hypothesis H_2 is nested within the hypothesis H_1 if and only if the rows of L_2 lie in the space spanned by the rows of L_1. Then the truth of H_1 (which is more restrictive than H_2) implies the truth of H_2, but not vice versa. Typically the rows of L_2 will be a proper subset of the rows of L_1.

The conditional hypothesis H_{1|2} is that L_1 β = 0 given that L_2 β = 0.
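The nesting condition is easy to verify numerically: appending the rows of L_2 to L_1 must not increase the rank. A small sketch, continuing with the illustrative matrices above:

L2 <- rbind(c(0, 0, 0, 1))              # k2 x p hypothesis matrix, k2 = 1
qr(rbind(L1, L2))$rank == qr(L1)$rank   # TRUE: rows of L2 lie in the row space of L1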
Basic Results: Testing Nested Hypotheses (Wald Test)

H_{1|2} can be tested by the Wald statistic

Z_{1|2} = (L_{1|2} b)′ (L_{1|2} V L_{1|2}′)^{−1} L_{1|2} b,

where L_{1|2} is the conjugate complement of the projection of the rows of L_2 into the row space of L_1, with respect to the inner product V.

The conditional Wald statistic Z_{1|2} is asymptotically distributed as chi-square with k_1 − k_2 degrees of freedom.
Basic Results: Testing Nested Hypotheses (F Test)

In some models, such as a generalized linear model with a dispersion parameter estimated from the data, we can alternatively compute an F-test of H_{1|2} as

F_{1|2} = [1 / (k_1 − k_2)] (L_{1|2} b)′ (L_{1|2} V L_{1|2}′)^{−1} L_{1|2} b.

If tests for all terms of a linear model are formulated in conformity with the principle of marginality, the conditional F-test produces so-called “Type-II” hypothesis tests.
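In R, conditional tests of this kind are available, for example, from the Anova() function in the car package; a brief sketch, using the fitted model mod assumed in the earlier code:

library(car)
Anova(mod, type = "II")   # conditional ("Type II") tests, respecting marginality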
Basic Results: Sketch of Justification

Let L_* be any r × p matrix whose rows extend the row space of L_2 to the row space of L_1 (so that r = k_1 − k_2).

The hypothesis

H_*: ψ_* = L_* β = 0 | H_2: ψ_2 = L_2 β = 0

is equivalent to the hypothesis H_1: L_1 β = 0 | H_2: L_2 β = 0, and is independent of the particular choice of L_*.
Basic Results: Sketch of Justification

The minimum-variance asymptotically unbiased estimator of ψ_* under the conditional null hypothesis is

ψ̂^C_* = L_* b − L_* V L_2′ (L_2 V L_2′)^{−1} L_2 b = L_{*|2} b,

where

L_{*|2} = L_* − L_* V L_2′ (L_2 V L_2′)^{−1} L_2.

Thus the test of H_{1|2} is based on the statistic

Z_{1|2} = (ψ̂^C_*)′ (L_{*|2} V L_{*|2}′)^{−1} ψ̂^C_*,

which is asymptotically distributed as chi-square with r degrees of freedom under H_1 given H_2.
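A minimal R sketch of this construction, continuing with the illustrative L1, L2, b, and V defined earlier; here L_* is taken to be the first row of L1, the row not in L2:

Lstar  <- L1[1, , drop = FALSE]                      # r x p rows extending L2 to L1 (r = k1 - k2 = 1)
Lstar2 <- Lstar - Lstar %*% V %*% t(L2) %*%
                  solve(L2 %*% V %*% t(L2)) %*% L2   # L_{*|2}
psiC   <- Lstar2 %*% b                               # conditional estimator of psi_* (r x 1)
Z12    <- drop(t(psiC) %*% solve(Lstar2 %*% V %*% t(Lstar2)) %*% psiC)
pchisq(Z12, df = nrow(Lstar), lower.tail = FALSE)    # conditional Wald test of H_{1|2}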
Basic Results: Geometric Interpretation

[Figure: the 2D confidence ellipse for ψ = (ψ_*, ψ_2), showing the perpendicular projection ψ̂_* and the oblique projection ψ̂^C_* of ψ̂ onto the ψ_* axis.]

If L_* and L_2 are 1 × p, then the 2D confidence ellipse for ψ = [ψ_*, ψ_2]′ = L_1 β is based on the estimated asymptotic variance AsyVar(ψ̂) = L_1 V L_1′.

The unrestricted estimator ψ̂_* is the perpendicular projection of ψ̂ = [ψ̂_*, ψ̂_2]′ = L_1 b onto the ψ_* axis.

ψ̂^C_* is the oblique projection of ψ̂ onto the ψ_* axis along the direction conjugate to the ψ_* axis with respect to the inner product (L_1 V L_1′)^{−1}.
Basic Results: Geometric Interpretation

The dashed ellipse is the asymptotic 2D confidence ellipse

E_2 = ψ̂ + (χ²_{.95; 2})^{1/2} (L_1 V L_1′)^{1/2} U,

where U is the unit circle and χ²_{.95; 2} is the .95 quantile of the chi-square distribution with two degrees of freedom.
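A sketch of computing and plotting such an ellipse in base R, continuing with L1, V, and b from the earlier code (the car package also provides ready-made ellipse-drawing functions):

psi_hat <- drop(L1 %*% b)                        # center: psi-hat = L1 b
S       <- L1 %*% V %*% t(L1)                    # AsyVar(psi-hat)
theta   <- seq(0, 2 * pi, length.out = 200)
U       <- rbind(cos(theta), sin(theta))         # unit circle
radius  <- sqrt(qchisq(0.95, df = 2))            # df = 2 for E2; use df = 1 for the solid ellipse E1
E2      <- psi_hat + radius * t(chol(S)) %*% U   # S^{1/2} U via the Cholesky factor of S
plot(t(E2), type = "l", asp = 1, xlab = "psi_*", ylab = "psi_2")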
Basic Results: Geometric Interpretation

The solid ellipse

E_1 = ψ̂ + (χ²_{.95; 1})^{1/2} (L_1 V L_1′)^{1/2} U

is generated by changing the degrees of freedom to one. One-dimensional projections of E_1 are ordinary confidence intervals for linear combinations of ψ = [ψ_*, ψ_2]′.

Under H_2, all projections of ψ̂ onto the ψ_* axis are unbiased estimators of ψ_*, with 95% confidence intervals given by the corresponding projection of the solid ellipse.

The projection in the direction conjugate to the ψ_* axis, that is, along the line through the center of the confidence ellipse and through the points on the ellipse with horizontal tangents, yields the confidence interval with the smallest width.
Examples: Dummy Regression

Suppose, for example, that we are interested in a dummy-regression model with linear predictor

η = β_1 + β_2 x + β_3 d + β_4 x d,

where x is a covariate and d is a dummy regressor, taking on the values 0 and 1.

Then the hypothesis H_2: β_4 = 0 (that there is no interaction between x and d) is nested within the hypothesis H_1: β_3 = β_4 = 0 (that there is neither an interaction between x and d nor a “main effect” of d).
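A sketch of this model in R, again with illustrative variable names (y, x, and a 0/1 dummy d in a data frame dat):

mod <- lm(y ~ x * d, data = dat)   # expands to y ~ x + d + x:d
b <- coef(mod)                     # (beta1, beta2, beta3, beta4)
V <- vcov(mod)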
Examples: Dummy Regression

[Figure: scatterplots of Y against X for the groups D = 0 and D = 1; panel (a), “No Interaction”, shows parallel within-group regression lines with intercepts β_1 and β_1 + β_3; panel (b), “Interaction”, shows within-group lines that also differ in slope.]
Examples: Dummy Regression

In this case we have

L_1 = [ 0 0 1 0 ]
      [ 0 0 0 1 ]

L_2 = [ 0 0 0 1 ].

The conditional hypothesis H_{1|2}: β_3 = β_4 = 0 | β_4 = 0 can be restated as H_{1|2}: β_3 = 0 | β_4 = 0, that is, as the hypothesis of no main effect of d assuming no interaction between x and d.

Here ψ_1 = [β_3, β_4]′, ψ_2 = β_4, and ψ_* = β_3.
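Putting the pieces together for this example in R (dat and its variables are illustrative; the conditional statistic follows the formulas on the earlier slides, and, as a cross-check, the Type II test for d reported by the car package corresponds to the same conditional hypothesis):

mod <- lm(y ~ x * d, data = dat)
b <- coef(mod); V <- vcov(mod)

L1 <- rbind(c(0, 0, 1, 0),   # beta3 = 0
            c(0, 0, 0, 1))   # beta4 = 0
L2 <- rbind(c(0, 0, 0, 1))   # beta4 = 0

Lstar  <- L1[1, , drop = FALSE]   # tests beta3, given beta4 = 0
Lstar2 <- Lstar - Lstar %*% V %*% t(L2) %*% solve(L2 %*% V %*% t(L2)) %*% L2
F12 <- drop(t(Lstar2 %*% b) %*% solve(Lstar2 %*% V %*% t(Lstar2)) %*% (Lstar2 %*% b)) /
       (nrow(L1) - nrow(L2))      # conditional F statistic for the main effect of d

library(car)
Anova(mod, type = "II")           # reports the Type II F test for d, among others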