Educational Robotics
1. Personalizing Robot Tutors to Individuals’ Learning Differences
Leyzberg, Spaulding, Scassellati [2014]
The Problem

Bloom’s Two Sigma: “Tutored students scored in the 95th percentile, or two sigmas above the mean on average, compared to students who received traditional classroom instruction.”

To what extent does personalization of a tutor robot’s lessons impact the performance of students? Can we replicate the benefits of one-on-one teacher-to-student interaction?
Keepon

“In previous work, the authors demonstrated that physically embodied robot tutors produce greater learning gains than on-screen virtual agents delivering the same lessons.”

“We chose Keepon because it is particularly well suited to expressive non-threatening social communication.”

Keepon acted as both a host and a tutor.
Nonograms

“To ensure the greatest likelihood of participants starting the study at the same skill level, we chose a puzzle game that is relatively obscure to American audiences.”

Researchers identified 10 general strategies for solving nonograms; one representative strategy is sketched below.
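The slides do not list the ten strategies, but a classic nonogram deduction of this kind is the “overlap” rule. A minimal sketch, assuming a single clue per row; the function name is mine, not the paper’s:

```python
# Overlap rule: for a single clue k in a row of length n, the leftmost
# placement covers cells [0, k-1] and the rightmost covers [n-k, n-1].
# Any cell covered by both placements must be filled in every solution,
# which happens whenever k > n - k.

def overlap_cells(row_length, clue):
    """Indices that are certainly filled for a single-clue row."""
    slack = row_length - clue        # how far the block can slide
    return list(range(slack, clue))  # empty when clue <= slack

print(overlap_cells(10, 7))  # [3, 4, 5, 6]: the middle four cells are forced
```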
Experiment
✘ 80 participants between 18 and 40 years old; no CS students
✘ Between-subjects design
✘ 4 groups
  ○ No Lessons
  ○ Randomized but Relevant
  ○ Additive Skill Assessment
  ○ Bayesian Skill Assessment
✘ Same 4 puzzles given to all participants
  ○ 4th puzzle was the first puzzle, but rotated
✘ 3 lessons per puzzle (trigger logic sketched below)
  ○ Triggered when a participant made no move for 45 seconds, or filled in 25%, 50%, or 75% of the puzzle
  ○ Only actionable lessons presented
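A minimal sketch of how that triggering protocol could be expressed. The 45-second idle threshold, the 25/50/75% milestones, and the three-lesson cap come from the slide; the function names and the is_actionable hook are hypothetical:

```python
import time

IDLE_SECONDS = 45                     # slide: no move for 45 seconds
FILL_THRESHOLDS = [0.25, 0.50, 0.75]  # slide: fill milestones
MAX_LESSONS = 3                       # slide: 3 lessons per puzzle

def should_trigger(last_move_time, fill_fraction, lessons_given):
    """Decide whether the robot should deliver its next lesson."""
    if lessons_given >= MAX_LESSONS:
        return False
    idle = time.time() - last_move_time >= IDLE_SECONDS
    milestone = fill_fraction >= FILL_THRESHOLDS[lessons_given]
    return idle or milestone

def next_lesson(candidate_lessons, puzzle_state):
    """Return the first lesson whose strategy is usable right now
    (“only actionable lessons presented”); is_actionable is assumed."""
    for lesson in candidate_lessons:
        if lesson.is_actionable(puzzle_state):
            return lesson
    return None
```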
Personalized Skill Assessment
✘ The skill assessments were based on an online assessment of the moves the participants took.
✘ “Both algorithms take as input the moves participants make in each puzzle and produce as output a live updated vector of ten elements, each representing the likelihood that the participant has mastered one of ten predefined skills.”
Approach: Additive
✘ p_i => positive indicator; takes as input the previous and current world states and determines whether skill i could have been applied
✘ w_p => weight of positive indicator; 50%
✘ n_i => negative indicator; takes as input the world state and determines if skill i is applicable
✘ w_n => weight of negative indicator; 1%
✘ d => starting value; 50%
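A minimal sketch of how these pieces might fit together. The weights (w_p = 0.5, w_n = 0.01) and starting value (d = 0.5) come from the slide; the exact update rule that combines them is an assumption, since the slide only defines the inputs:

```python
# Assumed combination rule: positive evidence blends the estimate toward 1
# by w_p; an unused-but-applicable skill decays it toward 0 by w_n.

W_POSITIVE = 0.50   # w_p: weight of positive indicator
W_NEGATIVE = 0.01   # w_n: weight of negative indicator
START_VALUE = 0.50  # d: initial mastery estimate for every skill

class AdditiveSkillModel:
    def __init__(self, num_skills=10):
        # One running estimate per predefined nonogram strategy.
        self.estimates = [START_VALUE] * num_skills

    def update(self, i, applied, applicable):
        """Update the estimate for skill i after one move.

        applied    -- positive indicator p_i fired: the move used skill i
        applicable -- negative indicator n_i fired: skill i was usable here
        """
        e = self.estimates[i]
        if applied:
            # Positive evidence: pull the estimate strongly toward 1.
            e += W_POSITIVE * (1.0 - e)
        elif applicable:
            # Skill was usable but unused: nudge the estimate toward 0.
            e -= W_NEGATIVE * e
        self.estimates[i] = e

model = AdditiveSkillModel()
model.update(i=3, applied=True, applicable=True)
print(model.estimates[3])  # 0.75 after one positive observation
```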
Approach: Bayesian
✘ Skills are either learned or not learned
✘ P(LEARNED) => probability the rule was learned, regardless of current state
✘ P(TRANSITION) => probability that the rule will make the transition to the learned state if it is not learned
✘ P(LEARNED) = 0 at t = 0
✘ P(MISTAKE) = 0.5 at t = 0
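This matches the shape of Bayesian knowledge tracing (Corbett & Anderson). A minimal sketch under that assumption; the initial values P(LEARNED) = 0 and P(MISTAKE) = 0.5 come from the slide, while treating P(MISTAKE) as both the slip and guess probability, and the numeric value of P(TRANSITION), are my assumptions:

```python
P_LEARNED_0 = 0.0   # slide: no skill is assumed learned at t = 0
P_MISTAKE = 0.5     # slide: initial chance of an erroneous move
P_TRANSITION = 0.1  # assumed: chance an unlearned skill becomes learned per step

def update_p_learned(p_learned, correct):
    """One Bayesian update of P(LEARNED) after observing a move."""
    if correct:
        # Correct move: either learned and no mistake, or unlearned but lucky.
        num = p_learned * (1 - P_MISTAKE)
        denom = num + (1 - p_learned) * P_MISTAKE
    else:
        # Incorrect move: either learned but slipped, or unlearned and wrong.
        num = p_learned * P_MISTAKE
        denom = num + (1 - p_learned) * (1 - P_MISTAKE)
    posterior = num / denom if denom else p_learned
    # Unlearned skills may transition to the learned state at each step.
    return posterior + (1 - posterior) * P_TRANSITION

p = P_LEARNED_0
for observed_correct in [True, True, False, True]:
    p = update_p_learned(p, observed_correct)
print(round(p, 3))  # running estimate that the skill is mastered
```

Note that while P(MISTAKE) stays at 0.5, correct and incorrect moves are equally likely in both states, so the estimate is driven by P(TRANSITION) alone; presumably the full system also refines P(MISTAKE) as evidence accumulates.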
Experiment Results: Puzzles

“Between personalized lesson groups, the Bayesian group did significantly better on the last puzzle than the Additive group, t(37) = 0.05.”
Experiment Results: Participant Feedback
✘ Three open-ended questions and five Likert-scale questions
✘ The personalized-lessons group rated the lessons as significantly more relevant
Conclusion
✘ “We saw a ‘one sigma,’ or one standard deviation, improvement in participants’ final puzzle solving time from those who received personalized lessons over those receiving randomized-but-relevant lessons.”
✘ “Some participants in the personalized groups claimed to be unaffected by the lessons but applied a lesson’s content more frequently immediately after receiving lessons.”
✘ “We find that participants who received personalized lessons outperformed participants who received non-personalized lessons in a pre-test/post-test performance metric.”
Discussion
✘ Does the task-based nature of the experiment model all learning?
✘ Value of using different skill assessments
✘ Students reported no benefit, but the results say otherwise
✘ Passive learning
✘ Clippy: modify the experiment to only provide help when the participant is frustrated
✘ Nonograms are obscure to a US audience
✘ Modify the experiment to test how individual…
✘ How did the assessment algorithms handle multiple actionable strategies?
✘ Expand to teaching social skills?
✘ What would you do with a Keepon?
2. Robo-Blocks: Designing Debugging Abilities in a Tangible Programming System for Early Primary School Children
Sipitakiat, Nusen [2012]
Seymour Papert

“In many schools today, the phrase ‘computer-aided instruction’ means making the computer teach the child. One might say the computer is being used to program the child. In my vision, the child programs the computer and, in doing so, both acquires a sense of mastery over a piece of the most modern and powerful technology and establishes an intimate contact with some of the deepest ideas from science, from mathematics, and from the art of intellectual model building.”
LOGO & Turtles
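Papert’s LOGO turtle survives almost unchanged in Python’s standard turtle module, so the canonical first program still runs today. A minimal example (not from either paper):

```python
# The classic LOGO exercise: the child programs the turtle to walk a square.
import turtle

t = turtle.Turtle()
for _ in range(4):
    t.forward(100)  # walk 100 steps, drawing a line
    t.left(90)      # turn 90 degrees counterclockwise

turtle.done()       # keep the drawing window open
```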
Robo-Blocks

Like a physical version of Scratch: children snap together tangible command blocks to program a floor robot.
Carver & Klahr’s Debugging Model
✘ Compare the goal and the actual outcome to determine if debugging is necessary.
✘ Articulate the bug by describing the discrepancy between the goal and the actual outcome.
✘ Pinpoint the bug in the program using various clues and techniques.
✘ Correct the bug and test the outcome.
Debugging Tools Introduced (sketched below)
✘ Step-by-step block execution
✘ Debugging flags
✘ The protractor
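To make the three tools concrete, here is a toy software model of a Robo-Blocks-style program; the block vocabulary, the Robot stub, and the console pauses are my assumptions, not the system’s actual hardware behavior:

```python
# A toy block program with the three debugging tools: step-by-step
# execution, flags, and angle choices (where the protractor helps).

class Robot:
    """Stub for the floor robot; tracks position and heading only."""
    def __init__(self):
        self.x = self.y = 0
        self.heading = 0  # degrees; 0 = "up", only right angles modeled

    def forward(self, steps):
        dx = {0: 0, 90: 1, 180: 0, 270: -1}[self.heading]
        dy = {0: 1, 90: 0, 180: -1, 270: 0}[self.heading]
        self.x += dx * steps
        self.y += dy * steps

    def turn(self, degrees):
        # The physical protractor helps children pick this angle.
        self.heading = (self.heading + degrees) % 360

program = [
    ("forward", 2),
    ("turn", 90),
    ("flag",),        # debugging flag: the robot signals on reaching it
    ("forward", 1),
]

robot = Robot()
for i, block in enumerate(program):
    if block[0] == "flag":
        print(f"flag reached at block {i}")
    elif block[0] == "forward":
        robot.forward(block[1])
    elif block[0] == "turn":
        robot.turn(block[1])
    # Step-by-step mode: pause after every block so children can compare
    # the robot's actual position with the outcome they intended.
    input(f"block {i} done (x={robot.x}, y={robot.y}); Enter to continue")
```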
Evaluation
✘ 52 students from 4 different schools
✘ Maze Game + Turtle Geometry tasks
✘ Debugging tools introduced as students faced difficulties
  ○ Followed Carver & Klahr’s strategy
✘ Data collection
  ○ Journaled observations
  ○ Student interviews
  ○ Video recordings of the sessions
Results

“It was clear to the researchers that the debugging tools played a significant role in helping the children to move forward and accomplish their task.”

“Our interview with children showed that they think the step-by-step function was the most useful, followed by the flags, and lastly the protractor.”
Discussion
✘ Product research vs. academic research
✘ Focus-group study vs. research paper
✘ No hypothesis / control group
✘ Extending the debugging ability to allow for breakpoints
✘ Value of debugging
✘ Dynamicland