Intelligent Tutoring Systems: A Meta-Analysis - PowerPoint PPT Presentation


  1. Intelligent Tutoring Systems: A Meta-Analysis. Wenting Ma, March 2011

  2. Meta-Analysis • Traditional methods of review focus on statistical significance testing • Significance testing is not well suited to research synthesis: it is highly dependent on sample size, and a null finding does not carry the same “weight” as a significant finding • Meta-analysis shifts the focus to the direction and magnitude of the effects across studies
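The following sketch is not part of the original slides; it is a minimal Python illustration, with made-up numbers, of the point above: two simulated studies with the same underlying effect can yield very different p-values simply because of sample size, while the standardized effect size stays comparable.

    # Not from the slides: a minimal sketch showing why p-values depend on sample
    # size while a standardized effect size stays comparable across studies.
    # Assumes numpy and scipy are available; all numbers are made up.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_shift = 0.4  # the same underlying effect in both simulated "studies"

    def cohens_d(treat, ctrl):
        """Standardized mean difference using a pooled standard deviation."""
        n1, n2 = len(treat), len(ctrl)
        sp = np.sqrt(((n1 - 1) * treat.var(ddof=1) + (n2 - 1) * ctrl.var(ddof=1)) / (n1 + n2 - 2))
        return (treat.mean() - ctrl.mean()) / sp

    for n in (20, 200):  # a small study and a large study of the same effect
        treat = rng.normal(true_shift, 1.0, n)
        ctrl = rng.normal(0.0, 1.0, n)
        t, p = stats.ttest_ind(treat, ctrl)
        print(f"n per group = {n:3d}  d = {cohens_d(treat, ctrl):.2f}  p = {p:.3f}")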

  3. Effect Size: The Key to Meta-Analysis • The effect size makes meta-analysis possible • It is the dependent variable of the analysis • It standardizes findings across studies so that they can be directly compared
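As an illustration of how the effect size becomes the common dependent variable, here is a small Python sketch (not from the slides; the summary statistics are hypothetical) that converts the means, standard deviations, and group sizes a primary study reports into a standardized mean difference.

    # Not from the slides: turning reported summary statistics into a
    # standardized mean difference. Values below are illustrative only.
    import math

    def standardized_mean_difference(m_t, sd_t, n_t, m_c, sd_c, n_c):
        """Cohen's d computed from the summary statistics a primary study reports."""
        pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
        return (m_t - m_c) / pooled_sd

    # Example: an ITS group vs. a comparison group with hypothetical summaries.
    d = standardized_mean_difference(m_t=78.0, sd_t=10.0, n_t=30, m_c=72.0, sd_c=11.0, n_c=30)
    print(round(d, 2))  # about 0.57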

  4. Strengths of Meta-Analysis • Imposes a discipline on the process of summing up research findings • Capable of finding relationships across studies that are obscured in other approaches • Protects against over-interpreting differences across studies • Can handle a large number of studies (which would overwhelm traditional approaches to review)

  5. Purpose of the Study • This review synthesizes research on the effectiveness of intelligent tutoring systems in computer-based learning environments.

  6. Intelligent Tutoring Systems (ITS) • ITS emerged as an interdisciplinary field with origins in cognitive science, artificial intelligence, and education (Conati, 2009).

  7. Theoretical Framework • Empirical studies have shown that one-to-one tutoring is a highly effective form of instruction that produces high levels of academic achievement and promotes knowledge construction (Bloom, 1984; Cohen, Kulik, & Kulik, 1982; Beck, Stern, & Haugsjaa, 1996; Corbett, 2001; Graesser, Jackson, Mathews, Mitchell, Olney, Ventura, Chipman, Franceschetti, Hu, Louwerse, Person, & TRG, 2003; Razzaq & Heffernan, 2004).

  8. Theoretical Framework • The purpose of ITS research is to provide the cognitive benefits of one-to-one tutoring for every child. • Like human tutors, ITS are capable of assessing students’ knowledge, generating individualized instruction and learning activities, assisting the repair of knowledge gaps, and promoting learning gains (Arnott, Hastings, & Allbritton, 2008).

  9. Theoretical Framework • Student modeling is the fundamental component of user adaptation in ITS research, and it distinguishes ITS from non-adaptive learning environments (Mitrovic, Koedinger, & Martin, 2003).

  10. Research Questions • What are the learning effects of intelligent tutoring learning environments in comparison with non-adaptive learning environments? • How do these effects vary when intelligent tutors are used for learning in different knowledge domains, settings, and educational levels? • How are these effect sizes influenced by methodological features of the research?

  11. Method • Selection Criteria • (a) Studies compared how much students learned from ITS with how much they learned from non-intelligent computer-based learning or conventional classroom instruction; • (b) they reported measurable cognitive outcomes such as recall, transfer, or a mix of both; • (c) they reported sufficient data to allow for effect size calculations; • (d) they were publicly available online or in library archives.

  12. Method • Selection of Studies • Searches were conducted in the following databases: Digital Dissertations, ERIC, Springer, ACM Digital Library, Science Direct, PsycINFO, and Web of Science. • The keywords applied in the search included “pedagogic* agent(s)”, “intellige* tutor(s)”, “intellige* tutoring system(s)”, “intellige* cognitive tutor(s)”, “intellige* agent(s)”, and “personalized virtual learning environments”.

  13. Method • Selection of Studies • In the initial screening phase, the abstracts of the articles were compared with criteria a, b, and d to filter out irrelevant studies. • After the initial screening, the 125 articles that met the inclusion criteria were retrieved and saved for further review of the full texts. • Data from the articles that met all inclusion criteria were coded using a pre-defined coding form and coding instructions developed for this meta-analysis.

  14. Method • Selection of Studies • Finally, 24 studies (involving 1,445 participants) passed all inclusion criteria and were coded for further analyses. • All effect sizes were calculated with Hedges’ correction for bias due to small sample sizes (Lipsey & Wilson, 2001).
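Below is a minimal sketch of the Hedges small-sample correction mentioned on this slide, assuming the standard formula g = J * d with J ≈ 1 - 3 / (4 * df - 1); it is not from the slides, and the inputs are hypothetical.

    # Not from the slides: the small-sample (Hedges) bias correction applied to
    # each effect size. Numbers below are hypothetical.
    def hedges_g(d, n_t, n_c):
        """Apply the small-sample bias correction to Cohen's d."""
        df = n_t + n_c - 2
        j = 1 - 3 / (4 * df - 1)
        return j * d

    print(round(hedges_g(0.57, 30, 30), 3))  # the correction only slightly shrinks d here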

  15. Distribution of Effect Sizes. Figure 1: Distribution of 24 independent effect sizes obtained from 14 articles (M = .61, SD = .49)

  16. Table 1: Overall Weighted Mean Effect Size
      Columns: N, k, g, SE, 95% CI lower, 95% CI upper, z (test of null), Q, df, p (test of heterogeneity), I2 (%)
      All studies: N = 1,445, k = 24, g = 0.68, SE = 0.05, CI [0.57, 0.78], z = 12.50*, Q = 67.17, df = 23, p = 0.00, I2 = 65.76
      * p < .05
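The sketch below is not from the slides; it shows how the quantities reported in Table 1 are typically obtained under inverse-variance weighting: the weighted mean g, its standard error and z test, and the Q and I2 heterogeneity statistics. The study values here are placeholders, not the 24 effect sizes from this meta-analysis.

    # Not from the slides: inverse-variance (fixed-effect) pooling of effect sizes.
    import numpy as np

    g = np.array([0.2, 0.5, 0.9, 0.7])      # per-study Hedges' g (hypothetical)
    v = np.array([0.04, 0.03, 0.05, 0.02])  # per-study sampling variances (hypothetical)

    w = 1.0 / v                              # inverse-variance weights
    mean_g = np.sum(w * g) / np.sum(w)       # weighted mean effect size
    se = np.sqrt(1.0 / np.sum(w))            # standard error of the pooled effect
    z = mean_g / se                          # test of the null (pooled effect = 0)
    q = np.sum(w * (g - mean_g) ** 2)        # heterogeneity statistic
    df = len(g) - 1
    i2 = max(0.0, (q - df) / q) * 100        # percent of variability beyond chance

    print(f"g = {mean_g:.2f}, SE = {se:.2f}, z = {z:.2f}, Q = {q:.2f}, I2 = {i2:.1f}%")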

  17. Table 2: Weighted Mean Effect Sizes by Study and Participant Characteristics
      Columns: N, k, g, SE, 95% CI lower, 95% CI upper, z, Q, df, p, I2 (%)
      Educational Level
      Elementary school (K-5): N = 480, k = 8, g = 0.60, SE = 0.09, CI [0.42, 0.78], z = 6.45*, Q = 13.51, df = 7, p = 0.06, I2 = 48.18
      Middle school (grades 6-8): N = 173, k = 5, g = 0.57, SE = 0.15, CI [0.27, 0.87], z = 3.74*, Q = 4.78, df = 4, p = 0.31, I2 = 16.30
      Post-secondary: N = 690, k = 10, g = 0.80, SE = 0.08, CI [0.64, 0.95], z = 9.94*, Q = 44.64, df = 9, p = 0.00, I2 = 79.84
      Mixed grades: N = 102, k = 1, g = 0.49, SE = 0.20, CI [0.10, 0.88], z = 2.46*, Q = 0.00, df = 0, p = 1.00, I2 = 0.00
      Within-levels (Qw): Q = 62.92, df = 20, p = 0.00
      Between-levels (QB): Q = 4.25, df = 3, p = 0.24
      * p < .05
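For the moderator tables in this and the following slides, here is a hedged sketch (not from the slides) of the usual Q partition: total heterogeneity is split into within-level (Qw) and between-level (QB) components, and QB tests whether the moderator explains variation in effect sizes. The studies and the educational-level grouping below are hypothetical.

    # Not from the slides: partitioning heterogeneity for a moderator analysis.
    import numpy as np

    g = np.array([0.2, 0.5, 0.9, 0.7, 0.4, 0.6])        # hypothetical effect sizes
    v = np.array([0.04, 0.03, 0.05, 0.02, 0.03, 0.04])  # hypothetical variances
    level = np.array(["K-5", "K-5", "post-sec", "post-sec", "6-8", "6-8"])

    w = 1.0 / v

    def weighted_mean(gs, ws):
        return np.sum(ws * gs) / np.sum(ws)

    overall = weighted_mean(g, w)
    q_total = np.sum(w * (g - overall) ** 2)

    # Qw: dispersion of studies around their own subgroup means
    q_within = sum(
        np.sum(w[level == lv] * (g[level == lv] - weighted_mean(g[level == lv], w[level == lv])) ** 2)
        for lv in np.unique(level)
    )
    q_between = q_total - q_within  # QB: variability accounted for by the moderator

    print(f"Q total = {q_total:.2f}, Qw = {q_within:.2f}, QB = {q_between:.2f}")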

  18. Table 2 (continued): Weighted Mean Effect Sizes by Subject/Domain
      Columns: N, k, g, SE, 95% CI lower, 95% CI upper, z, Q, df, p, I2 (%)
      Mathematics: N = 238, k = 5, g = 0.30, SE = 0.13, CI [0.04, 0.55], z = 2.30*, Q = 3.35, df = 4, p = 0.50, I2 = 0.00
      Computer Science: N = 636, k = 8, g = 0.84, SE = 0.08, CI [0.67, 1.00], z = 10.07*, Q = 38.47, df = 7, p = 0.00, I2 = 81.81
      Physics/Chemistry: N = 333, k = 7, g = 0.65, SE = 0.11, CI [0.43, 0.87], z = 5.77*, Q = 10.39, df = 6, p = 0.11, I2 = 42.26
      Humanities: N = 238, k = 4, g = 0.71, SE = 0.13, CI [0.45, 0.97], z = 5.38*, Q = 2.35, df = 3, p = 0.50, I2 = 0.00
      Within-levels (Qw): Q = 54.57, df = 20, p = 0.00
      Between-levels (QB): Q = 12.61, df = 3, p = 0.01
      * p < .05

  19. Table 3: Weighted Mean Effect Sizes by Study Design
      Columns: N, k, g, SE, 95% CI lower, 95% CI upper, z, Q, df, p, I2 (%)
      Design
      Random assignment: N = 1,097, k = 18, g = 0.83, SE = 0.06, CI [0.70, 0.95], z = 13.16*, Q = 27.68, df = 17, p = 0.05, I2 = 38.59
      Non-random assignment: N = 260, k = 3, g = 0.16, SE = 0.12, CI [-0.08, 0.41], z = 1.33, Q = 0.21, df = 2, p = 0.90, I2 = 0.00
      Not reported: N = 88, k = 3, g = 0.49, SE = 0.22, CI [0.06, 0.93], z = 2.22*, Q = 15.63, df = 2, p = 0.00, I2 = 87.21
      Within-levels (Qw): Q = 43.52, df = 21, p = 0.00
      Between-levels (QB): Q = 23.65, df = 2, p = 0.00
      Setting
      Laboratory: N = 965, k = 20, g = 0.63, SE = 0.07, CI [0.50, 0.76], z = 9.64*, Q = 26.78, df = 19, p = 0.11, I2 = 29.05
      Classroom: N = 480, k = 4, g = 0.77, SE = 0.10, CI [0.58, 0.96], z = 8.05*, Q = 38.94, df = 3, p = 0.00, I2 = 92.30
      Within-levels (Qw): Q = 65.72, df = 22, p = 0.00
      Between-levels (QB): Q = 1.45, df = 1, p = 0.23
      * p < .05

  20. Table 4: Weighted Mean Effect Sizes by Methodological Quality
      Columns: N, k, g, SE, 95% CI lower, 95% CI upper, z, Q, df, p, I2 (%)
      Confidence in effect size
      Low: N = 172, k = 3, g = 0.40, SE = 0.15, CI [0.10, 0.70], z = 2.59*, Q = 8.14, df = 2, p = 0.02, I2 = 75.43
      High: N = 1,273, k = 21, g = 0.72, SE = 0.06, CI [0.60, 0.83], z = 12.38*, Q = 55.29, df = 20, p = 0.00, I2 = 63.83
      Within-levels (Qw): Q = 63.43, df = 22, p = 0.00
      Between-levels (QB): Q = 3.74, df = 1, p = 0.05
      * p < .05

  21. Table 4 (continued): Weighted Mean Effect Sizes by Methodological Quality
      Columns: N, k, g, SE, 95% CI lower, 95% CI upper, z, Q, df, p, I2 (%)
      Treatment fidelity
      Low: N = 82, k = 2, g = 0.19, SE = 0.22, CI [-0.24, 0.61], z = 0.85, Q = 1.11, df = 1, p = 0.29, I2 = 9.63
      High: N = 1,363, k = 22, g = 0.71, SE = 0.06, CI [0.60, 0.82], z = 12.69*, Q = 60.63, df = 21, p = 0.00, I2 = 65.37
      Within-levels (Qw): Q = 61.74, df = 22, p = 0.00
      Between-levels (QB): Q = 5.43, df = 1, p = 0.02
      * p < .05

  22. Table 4 (continued): Weighted Mean Effect Sizes by Methodological Quality
      Columns: N, k, g, SE, 95% CI lower, 95% CI upper, z, Q, df, p, I2 (%)
      Publication source
      Journal: N = 1,154, k = 17, g = 0.76, SE = 0.06, CI [0.64, 0.88], z = 12.52*, Q = 39.98, df = 16, p = 0.00, I2 = 59.98
      Conference proceeding: N = 291, k = 7, g = 0.35, SE = 0.12, CI [0.12, 0.59], z = 2.96*, Q = 17.90, df = 6, p = 0.01, I2 = 66.47
      Within-levels (Qw): Q = 57.87, df = 22, p = 0.00
      Between-levels (QB): Q = 9.30, df = 1, p = 0.00
      * p < .05

  23. Scientific Implications • The analysis found an overall, statistically detectable learning benefit for students who learned from ITS compared with their peers in conventional classrooms or non-adaptive computer-based learning environments. • The learning effects produced by intelligent tutors were obtained across a variety of subject domains and at all educational levels. • The benefits of ITS were evident in both laboratory and classroom settings. • The claim that ITS are effective learning environments is consistent with our analysis of research quality, which found that the treatment fidelity of the learning environment and publication in peer-reviewed journals are positively associated with students’ learning gains.

  24. Possible Further Studies • What differentiates ITS from non-adaptive learning systems? • Why do ITS improve learning gains across studies? • What factors, including subject domain, participants’ educational level, and institutional setting, contribute most to the learning gains?
