  1. Chapter 10 Design of Experiments and Analysis of Variance

  2. Elements of a Designed Experiment
     • Response variable (also called the dependent variable)
     • Factors, quantitative and qualitative (also called the independent variables)
     • Factor levels
     • Treatments
     • Experimental unit

  3. Elements of a Designed Experiment: Designed vs. Observational Experiments
     • In a designed experiment, the analyst determines the treatments and the method of assigning experimental units to treatments.
     • In an observational experiment, the analyst observes treatments and responses but does not determine the treatments.
     • Many experiments are a mix of designed and observational elements.

  4. Elements of a Designed Experiment: Single-Factor Experiment
     [Diagram: population of interest, sample, independent variable, dependent variable]

  5. Elements of a Designed Experiment: Two-Factor Experiment

  6. The Completely Randomized Design
     • Achieved when the samples of experimental units for each treatment are random and independent of each other
     • The design is used to compare the treatment means:
       $H_0: \mu_1 = \mu_2 = \cdots = \mu_k$
       $H_a$: at least two of the treatment means differ

  7. The Completely Randomized Design
     • The hypotheses are tested by comparing the differences between the treatment means to the amount of sampling variability present
     • The test statistic is calculated from measures of variability within treatment groups and measures of variability between treatment groups

  8. The Completely Randomized Design
     Sum of Squares for Treatments (SST): a measure of the total variation between the treatment means, with k treatments. Calculated by
       $SST = \sum_{i=1}^{k} n_i (\bar{x}_i - \bar{x})^2$
     where
       $n_i$ = number of observations in the $i$th treatment group
       $\bar{x}_i$ = mean of the measurements in the $i$th treatment group
       $\bar{x}$ = overall mean of all measurements

  9. The Completely Randomized Design
     Sum of Squares for Error (SSE): a measure of the variability around the treatment means attributable to sampling error. Calculated by
       $SSE = \sum_{j=1}^{n_1} (x_{1j} - \bar{x}_1)^2 + \sum_{j=1}^{n_2} (x_{2j} - \bar{x}_2)^2 + \cdots + \sum_{j=1}^{n_k} (x_{kj} - \bar{x}_k)^2$
     After substitution, SSE can be rewritten as
       $SSE = (n_1 - 1)s_1^2 + (n_2 - 1)s_2^2 + \cdots + (n_k - 1)s_k^2$

  10. The Completely Randomized Design
     Mean Square for Treatments (MST): a measure of the variability among the treatment means
       $MST = \frac{SST}{k - 1}$
     Mean Square for Error (MSE): a measure of the sampling variability within the treatments
       $MSE = \frac{SSE}{n - k}$

  11. The Completely Randomized Design
     F-Statistic: the ratio of MST to MSE
       $F = \frac{MST}{MSE}$, with $df = (k - 1,\ n - k)$
     • Values of F close to 1 suggest that the population means do not differ
     • Values of F far from 1 suggest that the variation among treatment means exceeds the variation within treatments, supporting $H_a$
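
A minimal sketch, in Python, of how the quantities from slides 8-11 are computed by hand; the three treatment groups and their values are hypothetical, and the result is cross-checked against scipy.stats.f_oneway.

```python
import numpy as np
from scipy import stats

# Hypothetical response measurements for k = 3 treatments
groups = [
    np.array([24.1, 26.3, 25.0, 27.2]),
    np.array([21.5, 22.8, 23.9, 22.1]),
    np.array([28.4, 27.9, 29.6, 28.8]),
]

k = len(groups)                              # number of treatments
n = sum(len(g) for g in groups)              # total number of observations
grand_mean = np.concatenate(groups).mean()   # overall mean of all measurements

# SST: variation between treatment means, sum of n_i * (x_bar_i - x_bar)^2
sst = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# SSE: variation within treatments, equivalently sum of (n_i - 1) * s_i^2
sse = sum((len(g) - 1) * g.var(ddof=1) for g in groups)

mst = sst / (k - 1)     # mean square for treatments
mse = sse / (n - k)     # mean square for error
f_stat = mst / mse      # F-statistic with (k - 1, n - k) degrees of freedom

p_value = stats.f.sf(f_stat, k - 1, n - k)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

# Cross-check against SciPy's built-in one-way ANOVA
f_check, p_check = stats.f_oneway(*groups)
assert np.isclose(f_stat, f_check) and np.isclose(p_value, p_check)
```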

  12. The Completely Randomized Design
     Conditions required for a valid ANOVA F-test (completely randomized design):
     1. The samples are independent and randomly selected.
     2. All sampled populations have distributions that are approximately normal.
     3. The k population variances are equal.
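
One common way to screen conditions 2 and 3 in Python: a Shapiro-Wilk normality test within each treatment group and Levene's test for equal variances. Neither test is named on the slide; they are standard choices. This reuses the hypothetical `groups` data from the previous sketch.

```python
from scipy import stats

for i, g in enumerate(groups, start=1):
    _, p = stats.shapiro(g)        # H0: group i is normally distributed
    print(f"group {i}: Shapiro-Wilk p = {p:.3f}")

_, p = stats.levene(*groups)       # H0: the k population variances are equal
print(f"Levene's test p = {p:.3f}")
```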

  13. The Completely Randomized Design
     A format for an ANOVA summary table:

     Source       df     SS          MS                 F
     Treatments   k-1    SST         MST = SST/(k-1)    MST/MSE
     Error        n-k    SSE         MSE = SSE/(n-k)
     Total        n-1    SS(Total)

  14. The Completely Randomized Design ANOVA summary table: an example from Excel
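
For readers working outside Excel, a sketch of producing the same summary table with statsmodels; the data frame layout, column names, and treatment labels here are illustrative, not from the slides.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: the same three treatment groups used above, stacked long
df = pd.DataFrame({
    "response":  [24.1, 26.3, 25.0, 27.2, 21.5, 22.8, 23.9, 22.1,
                  28.4, 27.9, 29.6, 28.8],
    "treatment": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
})

model = ols("response ~ C(treatment)", data=df).fit()
table = sm.stats.anova_lm(model)     # df, SS, MS, F, and p by source
print(table)
```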

  15. The Completely Randomized Design
     Conducting an ANOVA for a completely randomized design:
     1. Assure randomness of the design and the independence and randomness of the samples.
     2. Check the normality and equal-variance assumptions.
     3. Create the ANOVA summary table.
     4. Conduct multiple comparisons for pairs of means as necessary or desired.
     5. If $H_0$ is not rejected, consider possible explanations, keeping in mind the possibility of a Type II error.

  16. Multiple Comparisons of Means
     • A significant F-test in an ANOVA tells you that the treatment means, as a group, are statistically different.
     • It does not tell you which pairs of means differ statistically from each other.
     • With k treatment means, there are c different pairs of means that can be compared, where
       $c = \frac{k(k - 1)}{2}$

  17. Multiple Comparisons of Means
     • There are three widely used techniques for making multiple comparisons of a set of treatment means.
     • In each technique, confidence intervals are constructed around the differences between means to facilitate comparison of pairs of means.
     • Selection of a technique is based on the experimental design and the comparisons of interest.
     • Most statistical analysis packages let the analyst choose among the procedures these three techniques use to calculate confidence intervals for differences between treatment means.

  18. Multiple Comparisons of Means
     Guidelines for selecting a multiple comparisons method in ANOVA:

     Method       Treatment Sample Sizes   Types of Comparisons
     Tukey        Equal                    Pairwise
     Bonferroni   Equal or unequal         Pairwise
     Scheffé      Equal or unequal         General contrasts
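
A sketch of the Tukey method on the hypothetical `groups` data from the earlier sketches; scipy.stats.tukey_hsd (SciPy 1.8+) performs all $c = k(k-1)/2$ pairwise comparisons and reports the confidence intervals the preceding slide describes.

```python
from scipy import stats

res = stats.tukey_hsd(*groups)     # all k(k-1)/2 = 3 pairwise comparisons
print(res)                         # mean differences and adjusted p-values

ci = res.confidence_interval(confidence_level=0.95)
print(ci.low)                      # lower bound for each pairwise difference
print(ci.high)                     # upper bound for each pairwise difference
```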

  19. The Randomized Block Design
     Two-step procedure for the randomized block design:
     1. Form b blocks (matched sets of experimental units) of p units each, where p is the number of treatments.
     2. Randomly assign one unit from each block to each treatment. (Total responses: n = bp)

  20. The Randomized Block Design
     Partitioning the sum of squares, with k treatments and b blocks:
       $SST = b \sum_{i=1}^{k} (\bar{x}_{T_i} - \bar{x})^2$
       $SSB = k \sum_{i=1}^{b} (\bar{x}_{B_i} - \bar{x})^2$
       $SS(\mathrm{Total}) = \sum_{i=1}^{n} (x_i - \bar{x})^2$
       $SSE = SS(\mathrm{Total}) - SST - SSB$

  21. The Randomized Block Design
     Calculating the mean squares:
       $MST = \frac{SST}{k - 1}$
       $MSE = \frac{SSE}{n - b - k + 1}$
     Setting the hypotheses:
       $H_0: \mu_1 = \mu_2 = \cdots = \mu_k$
       $H_a$: at least two treatment means differ
     Hypothesis testing:
       $F = \frac{MST}{MSE}$
     Rejection region: $F > F_\alpha$, with $F_\alpha$ based on $(k - 1)$ and $(n - b - k + 1)$ degrees of freedom
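
A minimal sketch of a randomized block ANOVA in statsmodels: modeling the response on treatment plus block with no interaction leaves the residual with $n - b - k + 1$ degrees of freedom, matching the F-test above. The b = 4 blocks, k = 3 treatments, and response values are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One unit per block-treatment pair, so n = bp = 12
df = pd.DataFrame({
    "response":  [10.2, 11.5, 12.3, 9.8, 11.0, 12.9,
                  10.5, 11.8, 13.1, 9.9, 11.2, 12.6],
    "treatment": ["T1", "T2", "T3"] * 4,
    "block":     ["B1"] * 3 + ["B2"] * 3 + ["B3"] * 3 + ["B4"] * 3,
})

model = ols("response ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model))   # treatment F uses (k-1, n-b-k+1) = (2, 6) df
```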

  22. The Randomized Block Design
     Conditions required for a valid ANOVA F-test (randomized block design):
     1. The b blocks are randomly selected, and all k treatments are randomly applied to each block.
     2. The distributions of all bk block-treatment combinations are approximately normal.
     3. The bk distributions have equal variances.

  23. The Randomized Block Design
     Conducting an ANOVA for a randomized block design:
     1. Ensure the design consists of blocks and that treatments are randomly assigned to the units within each block.
     2. Check the normality and equal-variance assumptions.
     3. Create the ANOVA summary table.
     4. Conduct multiple comparisons for pairs of means as necessary or desired.
     5. If $H_0$ is not rejected, consider possible explanations, keeping in mind the possibility of a Type II error.
     6. If desired, conduct the test of $H_0$ that the block means are equal.

  24. Factorial Experiments
     Complete factorial experiment: every factor-level combination is utilized.
     Schematic layout of a two-factor factorial experiment, with Factor A at a levels and Factor B at b levels:

                          Factor B at b levels
                     1              2              3              ...  b
     Factor A    1   Trt. 1         Trt. 2         Trt. 3         ...  Trt. b
     at a        2   Trt. b+1       Trt. b+2       Trt. b+3       ...  Trt. 2b
     levels      3   Trt. 2b+1      Trt. 2b+2      Trt. 2b+3      ...  Trt. 3b
                 ..  ...            ...            ...            ...  ...
                 a   Trt. (a-1)b+1  Trt. (a-1)b+2  Trt. (a-1)b+3  ...  Trt. ab

  25. Factorial Experiments
     Partitioning the total sum of squares
     • Usually done with a statistical package

  26. Factorial Experiments
     Conducting an ANOVA for a factorial design (a sketch follows this list):
     1. Partition the total sum of squares into treatment and error components.
     2. Test $H_0$ that the treatment means are equal. If $H_0$ is rejected, proceed to step 3.
     3. Partition the treatment sum of squares into main-effect and interaction sums of squares.
     4. Test $H_0$ that factors A and B do not interact. If $H_0$ is rejected, go to step 6; if $H_0$ is not rejected, go to step 5.
     5. Test for the main effects of Factor A and Factor B.
     6. Compare the treatment means.
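
A sketch of this procedure for a hypothetical 2x2 factorial with two replicates per cell: in the statsmodels formula, C(A) * C(B) expands to the A and B main effects plus the A:B interaction tested in step 4.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical responses for a complete 2x2 factorial, 2 replicates per cell
df = pd.DataFrame({
    "response": [4.1, 4.3, 5.0, 5.4, 6.2, 6.0, 8.9, 9.3],
    "A": ["low"] * 4 + ["high"] * 4,
    "B": ["b1", "b1", "b2", "b2"] * 2,
})

model = ols("response ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(model))    # rows: C(A), C(B), C(A):C(B), Residual

# Step 4: examine the C(A):C(B) row first. If the interaction is significant,
# compare individual treatment (cell) means rather than interpreting main effects.
```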

  27. Factorial Experiments SPSS ANOVA Output for a factorial experiment
