  1. Workshop 7.4a: Single factor ANOVA Murray Logan 23 Nov 2016

  2. Section 1 Revision

  3. Estimation

     Y     X
     ----  ---
     3.0   0
     2.5   1
     6.0   2
     5.5   3
     9.0   4
     8.6   5
     12.0  6

     3.0 = β0 × 1 + β1 × 0 + ε1
     2.5 = β0 × 1 + β1 × 1 + ε2
     6.0 = β0 × 1 + β1 × 2 + ε3
     5.5 = β0 × 1 + β1 × 3 + ε4

  4. Estimation

     3.0  = β0 × 1 + β1 × 0 + ε1
     2.5  = β0 × 1 + β1 × 1 + ε2
     6.0  = β0 × 1 + β1 × 2 + ε3
     5.5  = β0 × 1 + β1 × 3 + ε4
     9.0  = β0 × 1 + β1 × 4 + ε5
     8.6  = β0 × 1 + β1 × 5 + ε6
     12.0 = β0 × 1 + β1 × 6 + ε7

     [ 3.0]   [1 0]            [ε1]
     [ 2.5]   [1 1]            [ε2]
     [ 6.0]   [1 2]   [ β0 ]   [ε3]
     [ 5.5] = [1 3] × [ β1 ] + [ε4]
     [ 9.0]   [1 4]            [ε5]
     [ 8.6]   [1 5]            [ε6]
     [12.0]   [1 6]            [ε7]

     Response   Model    Parameter   Residual
     values     matrix   vector      vector

  5. Matrix algebra

     [ 3.0]   [1 0]            [ε1]
     [ 2.5]   [1 1]            [ε2]
     [ 6.0]   [1 2]   [ β0 ]   [ε3]
     [ 5.5] = [1 3] × [ β1 ] + [ε4]
     [ 9.0]   [1 4]            [ε5]
     [ 8.6]   [1 5]            [ε6]
     [12.0]   [1 6]            [ε7]

     Response   Model    Parameter   Residual
     values     matrix   vector      vector

     Y = Xβ + ε
     β̂ = (X′X)⁻¹X′Y
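     As a rough illustration (the object names here are ours, not from the
     slides), the estimate β̂ = (X′X)⁻¹X′Y can be computed directly in R and
     checked against lm():

     ## revision data from the table above
     Y <- c(3, 2.5, 6, 5.5, 9, 8.6, 12)
     X <- cbind(1, 0:6)                # model matrix: intercept and slope columns
     solve(t(X) %*% X) %*% t(X) %*% Y  # beta-hat via the normal equations
     x <- 0:6
     coef(lm(Y ~ x))                   # same estimates from lm()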

  6. Section 2 ANOVA parameterization

  7. Simple ANOVA: three treatments (one factor with 3 levels), three replicates

  8. Simple ANOVA: two treatments, three replicates

  9. Categorical predictor

     Y    A    dummy1   dummy2   dummy3
     ---  ---  -------  -------  -------
     2    G1   1        0        0
     3    G1   1        0        0
     4    G1   1        0        0
     6    G2   0        1        0
     7    G2   0        1        0
     8    G2   0        1        0
     10   G3   0        0        1
     11   G3   0        0        1
     12   G3   0        0        1

     y_ij = µ + β1(dummy1)_ij + β2(dummy2)_ij + β3(dummy3)_ij + ε_ij

  10. Overparameterized

     Y    A    Intercept   dummy1   dummy2   dummy3
     ---  ---  ----------  -------  -------  -------
     2    G1   1           1        0        0
     3    G1   1           1        0        0
     4    G1   1           1        0        0
     6    G2   1           0        1        0
     7    G2   1           0        1        0
     8    G2   1           0        1        0
     10   G3   1           0        0        1
     11   G3   1           0        0        1
     12   G3   1           0        0        1

     y_ij = µ + β1(dummy1)_ij + β2(dummy2)_ij + β3(dummy3)_ij + ε_ij

  11. Overparameterized

     y_ij = µ + β1(dummy1)_ij + β2(dummy2)_ij + β3(dummy3)_ij + ε_ij

     • three treatment groups
     • four parameters to estimate
     • need to re-parameterize (a quick check of this is sketched below)
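     One way to see the overparameterization (a sketch, with illustrative
     object names) is that a design matrix with an intercept plus all three
     dummy columns has four columns but only rank three:

     A <- factor(rep(c("G1", "G2", "G3"), each = 3))
     X <- cbind(Intercept = 1, model.matrix(~ -1 + A))  # intercept + dummy1..dummy3
     ncol(X)        # 4 columns (parameters)
     qr(X)$rank     # rank 3, so one parameter cannot be estimated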

  12. Categorical predictor: means parameterization

     y_i = µ + β1(dummy1)_i + β2(dummy2)_i + β3(dummy3)_i + ε_i

     becomes (dropping µ)

     y_i = β1(dummy1)_i + β2(dummy2)_i + β3(dummy3)_i + ε_i

     i.e. y_ij = α_i + ε_ij   (i = 1, …, p)

  13. Categorical predictor: means parameterization

     Y    A    dummy1   dummy2   dummy3
     ---  ---  -------  -------  -------
     2    G1   1        0        0
     3    G1   1        0        0
     4    G1   1        0        0
     6    G2   0        1        0
     7    G2   0        1        0
     8    G2   0        1        0
     10   G3   0        0        1
     11   G3   0        0        1
     12   G3   0        0        1

     y_i = β1(dummy1)_i + β2(dummy2)_i + β3(dummy3)_i + ε_i

  14. Categorical predictor: means parameterization

     y_i = α1 D1_i + α2 D2_i + α3 D3_i + ε_i
     y_i = α_p + ε_i

     where p = number of levels of the factor and D = dummy variables

          Y      A
     ---  -----  ---
     1    2.00   G1
     2    3.00   G1
     3    4.00   G1
     4    6.00   G2
     5    7.00   G2
     6    8.00   G2
     7    10.00  G3
     8    11.00  G3
     9    12.00  G3

     [ 2]   [1 0 0]           [ε1]
     [ 3]   [1 0 0]           [ε2]
     [ 4]   [1 0 0]   [ α1 ]  [ε3]
     [ 6]   [0 1 0]           [ε4]
     [ 7] = [0 1 0] × [ α2 ] + [ε5]
     [ 8]   [0 1 0]           [ε6]
     [10]   [0 0 1]   [ α3 ]  [ε7]
     [11]   [0 0 1]           [ε8]
     [12]   [0 0 1]           [ε9]
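     A sketch of setting these data up in R (object names are illustrative);
     fitting with the intercept removed (~ -1 + A) gives this means
     parameterization, whose output is shown on the next slide:

     Y <- c(2, 3, 4, 6, 7, 8, 10, 11, 12)
     A <- factor(rep(c("G1", "G2", "G3"), each = 3))
     model.matrix(~ -1 + A)        # the dummy (indicator) matrix above
     summary(lm(Y ~ -1 + A))$coef  # one coefficient per group mean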

  15. Categorical predictor: means parameterization

     > summary(lm(Y ~ -1 + A))$coef
         Estimate  Std. Error    t value     Pr(>|t|)
     AG1        3   0.5773503   5.196152 2.022368e-03
     AG2        7   0.5773503  12.124356 1.913030e-05
     AG3       11   0.5773503  19.052559 1.351732e-06

     Parameter   Estimates         Null hypothesis
     ---------   ---------------   ---------------
     α1          mean of group 1   H0: α1 = 0
     α2          mean of group 2   H0: α2 = 0
     α3          mean of group 3   H0: α3 = 0

     ...but we are typically more interested in exploring effects.

  16. Categorical predictor: effects parameterization

     y_i = µ + β1(dummy1)_i + β2(dummy2)_i + β3(dummy3)_i + ε_i

     becomes (dropping dummy1)

     y_ij = µ + β2(dummy2)_ij + β3(dummy3)_ij + ε_ij

     i.e. y_ij = µ + α_i + ε_ij   (p − 1 effect parameters)

  17. Categorical predictor: effects parameterization

     Y    A    alpha   dummy2   dummy3
     ---  ---  ------  -------  -------
     2    G1   1       0        0
     3    G1   1       0        0
     4    G1   1       0        0
     6    G2   1       1        0
     7    G2   1       1        0
     8    G2   1       1        0
     10   G3   1       0        1
     11   G3   1       0        1
     12   G3   1       0        1

     y_i = α + β2(dummy2)_i + β3(dummy3)_i + ε_i

  18. Categorical predictor: effects parameterization

     y_i = α + β2 D2_i + β3 D3_i + ε_i
     y_i = α_p + ε_i

     where p = number of levels of the factor minus 1 and D = dummy variables

          Y      A
     ---  -----  ---
     1    2.00   G1
     2    3.00   G1
     3    4.00   G1
     4    6.00   G2
     5    7.00   G2
     6    8.00   G2
     7    10.00  G3
     8    11.00  G3
     9    12.00  G3

     [ 2]   [1 0 0]           [ε1]
     [ 3]   [1 0 0]           [ε2]
     [ 4]   [1 0 0]   [ µ  ]  [ε3]
     [ 6]   [1 1 0]           [ε4]
     [ 7] = [1 1 0] × [ α2 ] + [ε5]
     [ 8]   [1 1 0]           [ε6]
     [10]   [1 0 1]   [ α3 ]  [ε7]
     [11]   [1 0 1]           [ε8]
     [12]   [1 0 1]           [ε9]
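     Using the same Y and A objects sketched earlier, R's default formula
     Y ~ A fits this effects parameterization: the model matrix carries an
     intercept plus p − 1 dummy columns:

     model.matrix(~ A)        # (Intercept), AG2 (dummy2), AG3 (dummy3)
     summary(lm(Y ~ A))$coef  # intercept = mean of G1; others are differences from it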

  19. Categorical predictor: treatment contrasts

     > contrasts(A) <- contr.treatment
     > contrasts(A)
        2 3
     G1 0 0
     G2 1 0
     G3 0 1

     > summary(lm(Y ~ A))$coef
                 Estimate  Std. Error   t value     Pr(>|t|)
     (Intercept)        3   0.5773503  5.196152 2.022368e-03
     A2                 4   0.8164966  4.898979 2.713682e-03
     A3                 8   0.8164966  9.797959 6.506149e-05

     Parameter   Estimates                                  Null hypothesis
     ---------   ----------------------------------------   ---------------
     Intercept   mean of control group                      H0: µ = µ1 = 0
     α2          mean of group 2 minus mean of control      H0: α2 = 0
     α3          mean of group 3 minus mean of control      H0: α3 = 0

  20. Categorical predictor: treatment contrasts

     > summary(lm(Y ~ A))$coef
                 Estimate  Std. Error   t value     Pr(>|t|)
     (Intercept)        3   0.5773503  5.196152 2.022368e-03
     A2                 4   0.8164966  4.898979 2.713682e-03
     A3                 8   0.8164966  9.797959 6.506149e-05

     Parameter   Estimates                                  Null hypothesis
     ---------   ----------------------------------------   ---------------
     Intercept   mean of control group                      H0: µ = µ1 = 0
     α2          mean of group 2 minus mean of control      H0: α2 = 0
     α3          mean of group 3 minus mean of control      H0: α3 = 0

  21. Categorical predictor: user-defined contrasts

     > contrasts(A) <- cbind(c(0, 1, -1), c(1, -0.5, -0.5))
     > contrasts(A)
        [,1] [,2]
     G1    0  1.0
     G2    1 -0.5
     G3   -1 -0.5

     Contrast                    G1   G2    G3
     -------------------------   --   ----  ----
     α2: Grp2 vs Grp3            0    1     -1
     α3: Grp1 vs (Grp2 & Grp3)   1    -0.5  -0.5

  22. Categorical predictor: user-defined contrasts

     • p − 1 comparisons (contrasts)
     • all contrasts must be orthogonal

  23. Categorical predictor: orthogonality

     Four groups (A, B, C, D): p − 1 = 3 comparisons

     1. A vs B :: A > B
     2. B vs C :: B > C
     3. A vs C :: already implied by 1 and 2, so not an independent
        comparison (see the sketch below)
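     A small check (the group labels are just the hypothetical A to D above):
     writing the three comparisons as contrast vectors and taking their
     cross-product gives non-zero off-diagonal elements, so this set of
     comparisons cannot all be orthogonal:

     cmat <- cbind(AvsB = c(1, -1, 0, 0),
                   BvsC = c(0, 1, -1, 0),
                   AvsC = c(1, 0, -1, 0))  # rows: groups A, B, C, D
     crossprod(cmat)                       # off-diagonals are not all zero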

  24. Categorical predictor: user-defined contrasts

     > contrasts(A) <- cbind(c(0, 1, -1), c(1, -0.5, -0.5))
     > contrasts(A)
        [,1] [,2]
     G1    0  1.0
     G2    1 -0.5
     G3   -1 -0.5

     Checking orthogonality (sum of the products of the two columns):

      0 ×  1.0 =  0
      1 × -0.5 = -0.5
     -1 × -0.5 =  0.5
     ----------------
     sum       =  0

  25. Categorical predictor: user-defined contrasts

     > contrasts(A) <- cbind(c(0, 1, -1), c(1, -0.5, -0.5))
     > contrasts(A)
        [,1] [,2]
     G1    0  1.0
     G2    1 -0.5
     G3   -1 -0.5

     > crossprod(contrasts(A))
          [,1] [,2]
     [1,]    2  0.0
     [2,]    0  1.5

     > summary(lm(Y ~ A))$coef
                 Estimate  Std. Error    t value     Pr(>|t|)
     (Intercept)        7   0.3333333  21.000000 7.595904e-07
     A1                -2   0.4082483  -4.898979 2.713682e-03
     A2                -4   0.4714045  -8.485281 1.465426e-04

  26. Categorical predictor: user-defined contrasts

     > contrasts(A) <- cbind(c(1, -0.5, -0.5), c(1, -1, 0))
     > contrasts(A)
        [,1] [,2]
     G1  1.0    1
     G2 -0.5   -1
     G3 -0.5    0

     > crossprod(contrasts(A))
          [,1] [,2]
     [1,]  1.5  1.5
     [2,]  1.5  2.0

     The non-zero off-diagonal shows that these contrasts are not orthogonal.

  27. Section 3 Partitioning of variance (ANOVA)

  28. ANOVA: partitioning of variance
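     As a minimal sketch (reusing the Y and A objects from the earlier
     sketches), the partitioning of variance for this example is obtained
     with anova():

     anova(lm(Y ~ A))  # between-group vs residual sums of squares and the F test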
