Making Sense of Multimodal Learning Analytics (MMLA)
Marcelo Worsley, Assistant Professor, Learning Sciences & Computer Science
Agenda
- What is MMLA?
- Why use MMLA?
- How to use MMLA?
- Examples of MMLA research
- On-going challenges and future directions
- Resources
What is MMLA?
Learning Analytics: a set of multi-modal sensory inputs that can be used to predict, understand and quantify student learning. (Worsley and Blikstein, 2011)
What is MMLA?
Multimodal learning analytics (MMLA) (Blikstein & Worsley, in press; Blikstein, 2013; Worsley, 2012) sits at the intersection of three ideas: multimodal teaching and learning, multimodal data, and computer-supported analysis. At its essence, MMLA utilizes and triangulates among non-traditional as well as traditional forms of data in order to characterize or model student learning in complex learning environments. However, as we describe later, the ways that researchers utilize multimodal data vary widely. (Worsley, Abrahamson, Blikstein, Grover, Schneider and Tissenbaum, 2016)
What is MMLA?
Modalities and the features each can yield:
- VIDEO: Pose, Gaze, Gestures, Facial Expressions, Emotion/Affect, Head Pose, Heartrate, Object Tracking
- AUDIO: Speaker Diarization, Prosody, Spectrum, Emotion/Affect, Intensity, Speaking Rate, Voice Quality, Collaboration/Turn-Taking
- TEXT: Emotion/Affect, Complexity, Cohesion, Semantics, Syntactics, Content
- EYE TRACKING: Fixations, Fixation Pattern, Pupil Dilation, Eye Location, Attention
- DEPTH CAMERA: Gestures, Body Position, Joint Tracking, Object Manipulation, Proxemics, Scene Understanding, Movement
- SMARTPHONES: Sitting/Standing, Location, Noise Level
- LEAP MOTION: Gestures, Hand Movement, Finger Movement
- EDA WATCH: Arousal, Cognitive Load, Body Temperature, Heartrate, Stress
- EVENT LOGS: Activity, Tool Usage
- OTHER MODALITIES: fMRI, Electrocardiography, Electroencephalography, Electromyography, Electrogastrography
Why use MMLA?
- Teaching and learning are multimodal
- Study and support complex learning environments
- See the hard to see
- Inform design of multimodal technologies
- Expand notions of learning to non-traditional modalities
- Improve accessibility and inclusivity
- Triangulate across modalities
Why use MMLA?
- Visualizing/representing information for human inference
- Prediction of indicators
- Data-driven interventions
- Evaluating conjecture-based learning designs
How to use MMLA?
DATA CAPTURE SOFTWARE
- Open Social Signal Interpretation
- Lab Streaming Layer
- Open Pipe Kit
- Open Broadcaster Software
- Multisense (AV Recorder)
- iMotions Attention Tool
How to use MMLA?
Pre-processing is important for data synchronization, accounting for individual differences between participants, and getting data in the appropriate format for data extraction. It tends to vary by data type.
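As a concrete illustration, synchronization often reduces to aligning streams recorded at different sampling rates onto a common timeline. Below is a minimal sketch in Python, assuming two hypothetical streams with sorted timestamps; the stream names and rates are illustrative, not from any particular toolkit:

```python
from bisect import bisect_left

def align_nearest(ts_a, ts_b, values_b):
    """For each timestamp in stream A, pick the value from stream B
    whose timestamp is closest. Both timestamp lists must be sorted."""
    aligned = []
    for t in ts_a:
        i = bisect_left(ts_b, t)
        # candidates are ts_b[i-1] and ts_b[i]; pick the nearer one
        if i == 0:
            j = 0
        elif i == len(ts_b):
            j = len(ts_b) - 1
        else:
            j = i if ts_b[i] - t < t - ts_b[i - 1] else i - 1
        aligned.append(values_b[j])
    return aligned

# Hypothetical clocks: video frames near 30 Hz vs. EDA samples at 4 Hz
video_ts = [0.00, 0.033, 0.066, 0.100]
eda_ts = [0.00, 0.25, 0.50]
eda_vals = [0.8, 0.9, 1.1]
print(align_nearest(video_ts, eda_ts, eda_vals))
```

In practice a tool like Lab Streaming Layer handles clock offsets for you at capture time; nearest-neighbor alignment like this is a common fallback when streams were recorded independently.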
How to use MMLA?
DATA EXTRACTION SOFTWARE
- AUDIO: HTK, Praat, OpenSmile, OpenEar, Covarep, ICSI Diarizer, LIUM Diarizer, CMU Sphinx, Google ASR, AT&T Watson ASR, Bing Speech API, Emovoice, Audacity
- VIDEO: OpenFace (face recognition), Emotient FACET, Noldus Face Reader, OpenCV, Affectiva, Microsoft Face API, Microsoft Emotion API, Intraface, Multisense, Eulerian Magnification Code
- TEXT: Natural Language Toolkit, Lightside, LIWC, Coh-Metrix, Tone Analyzer, Stanford Parser, Wordnet, Sentiwordnet, Mallet, Word2Vec
- EYE TRACKING: OpenFace (gaze), PyGaze, Ogama, EyeTrackingR, Tobii SDK, SMI SDK, iMotions Attention Tool, Pupil Dilation
- DEPTH CAMERA: OpenNUI, Kinect for Windows, QSRLib, Libfreenect
- EDA WATCH: Ledalab, Open Social Signal Interpretation
- GENERAL: Matlab, ELAN, iMotions
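Some of the simpler text features in the list above can be approximated in a few lines before reaching for a full toolkit. A minimal sketch, with a crude type-token ratio standing in for the much richer complexity measures that tools like LIWC or Coh-Metrix compute; the function names and the example utterance are hypothetical:

```python
import re

def type_token_ratio(transcript):
    """Crude lexical-diversity feature: unique words / total words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return len(set(words)) / len(words) if words else 0.0

def speaking_rate(transcript, duration_seconds):
    """Words per minute over an utterance of known duration."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return 60.0 * len(words) / duration_seconds

utterance = "I think we should try the other gear first"
print(type_token_ratio(utterance))    # all 9 words unique -> 1.0
print(speaking_rate(utterance, 3.0))  # 9 words in 3 s -> 180 wpm
```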
How to use MMLA?
Human Coders
Fusion
Let's say that you want to study engagement and have audio, video, bio-physiological and gesture data available for analysis. How do you use these to compute a measure of engagement?
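One common answer is feature-level fusion: standardize each modality's per-window feature so the streams are comparable, then combine them into a single score. A minimal sketch, in which the feature names, values, and equal default weights are all hypothetical:

```python
from statistics import mean, pstdev

def zscores(xs):
    """Standardize one modality's feature so modalities are comparable."""
    mu, sd = mean(xs), pstdev(xs)
    return [(x - mu) / sd if sd else 0.0 for x in xs]

def fuse_engagement(modalities, weights=None):
    """Feature-level fusion: z-score each modality's per-window feature,
    then take a (weighted) average per time window.
    `modalities` maps modality name -> list of per-window values."""
    names = list(modalities)
    weights = weights or {n: 1.0 for n in names}
    standardized = {n: zscores(modalities[n]) for n in names}
    total = sum(weights[n] for n in names)
    n_windows = len(next(iter(modalities.values())))
    return [
        sum(weights[n] * standardized[n][i] for n in names) / total
        for i in range(n_windows)
    ]

# Hypothetical per-window features from four streams
features = {
    "speech_rate": [1.2, 2.0, 0.4],   # words/sec
    "gaze_on_task": [0.9, 0.7, 0.2],  # fraction of window
    "eda_arousal": [0.3, 0.6, 0.1],   # microsiemens
    "hand_motion": [5.0, 9.0, 1.0],   # cm/sec
}
scores = fuse_engagement(features)
print(scores)  # middle window scores highest, last window lowest
```

Decision-level fusion is the main alternative: classify engagement per modality first, then vote or average the per-modality labels.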
Examples
Example pipeline:
- Data capture: Xbox Kinect (custom software)
- Features: Audio (talk), Video (pose/gaze), Gestures (hand movement)
- Analysis: RapidMiner (XMeans clustering)
- Interpretation: Researcher(s)
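The XMeans step here is an extension of k-means that selects the number of clusters automatically (typically via BIC). A minimal sketch of the underlying k-means loop, with hypothetical per-window feature vectors; a real analysis would use RapidMiner or a comparable library rather than this toy loop:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on tuples; X-Means wraps a loop like this
    and searches over k with a model-selection criterion."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # recompute each center as its cluster's mean
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(dim) / len(cl) for dim in zip(*cl))
    return centers, clusters

# Hypothetical per-window features: (talk fraction, hand-movement energy)
windows = [(0.1, 0.2), (0.15, 0.1), (0.9, 0.8), (0.95, 0.9)]
centers, clusters = kmeans(windows, k=2)
```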
Challenges & Future Directions
Simplifying Data Capture & Analysis
Better Visualization/Inference Tools
Best Practices Around Data Fusion and Data Analysis Pipelines
Applications (especially as it relates to inclusive technology and providing feedback to learners and teachers)
Resources
- Multimodal Learning Analytics Special Interest Group - http://sigmla.org
- CrossMMLA Workshops: EC-TEL 2017, LAK 2018 (tentative)
- MMLA Workshops @ ICMI, LAK, ICLS (2012 - Present)
- SoLAR LASI - Multimodal Learning Analytics Tutorial Workshops
- Journal of Learning Analytics Special Section on Multimodal Learning Analytics
- CIRCL Cyberlearning Report
Berland, M., Baker, R. S., & Blikstein, P. (2014). Educational Data Mining and Learning Analytics: Applications to Constructionist Research. Technology, Knowledge and Learning, 19(1–2), 205–220.
Blikstein, P., & Worsley, M. (2016). Multimodal Learning Analytics and Education Data Mining: Using computational technologies to measure complex learning tasks. Journal of Learning Analytics, 3(2), 220–238. https://doi.org/10.18608/jla.2016.32.11
Fouse, A. S. (2011). ChronoViz: A system for supporting navigation of time-coded data. CHI, 1–6. https://doi.org/10.1145/1979742.1979706
Grafsgaard, J. F. (2014). Multimodal Analysis and Modeling of Nonverbal Behaviors During Tutoring. In Proceedings of the 16th International Conference on Multimodal Interaction (pp. 404–408). New York, NY, USA: ACM. https://doi.org/10.1145/2663204.2667611
Leong, C. W., Chen, L., Feng, G., Lee, C. M., & Mulholland, M. (2015). Utilizing depth sensors for analyzing multimodal presentations: Hardware, software and toolkits. Proceedings of the 2015 ACM International Conference on Multimodal Interaction (ICMI 2015), 547–556. https://doi.org/10.1145/2818346.2830605
Luz, S. (2013). Automatic identification of experts and performance prediction in the multimodal math data corpus through analysis of speech interaction. Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13), 575–582. https://doi.org/10.1145/2522848.2533788
Morency, L.-P., Oviatt, S., Scherer, S., Weibel, N., & Worsley, M. (2013). ICMI 2013 grand challenge workshop on multimodal learning analytics. Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13), 373–378. https://doi.org/10.1145/2522848.2534669
Oviatt, S., & Cohen, A. (2013). Written and multimodal representations as predictors of expertise and problem-solving success in mathematics. Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13), 1–8. Retrieved from http://dl.acm.org/citation.cfm?id=2533793
Scherer, S., Worsley, M., & Morency, L.-P. (2012). 1st international workshop on multimodal learning analytics. In ICMI (pp. 609–610).
Schneider, B., & Blikstein, P. (2015). Unraveling Students' Interaction Around a Tangible Interface using Multimodal Learning Analytics. Journal of Educational Data Mining.
Spikol, D. (2017). Using Multimodal Learning Analytics to Identify Aspects of Collaboration in Project-Based Learning. Proceedings of CSCL 2017, 263–270. https://doi.org/10.22318/cscl2017.37
Thompson, K. (2013). Using micro-patterns of speech to predict the correctness of answers to mathematics problems: An exercise in multimodal learning analytics. Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13). Retrieved from http://dl.acm.org/citation.cfm?id=2533792
Worsley, M. (2012). Multimodal Learning Analytics: Enabling the future of learning through multimodal data analysis and interfaces. International Conference on Multimodal Interaction, 12–15. https://doi.org/10.1145/2388676.2388755
Worsley, M., Scherer, S., Morency, L.-P., & Blikstein, P. (2016). Exploring Behavior Representation for Learning Analytics. Proceedings of the 2015 International Conference on Multimodal Interaction (ICMI), 251–258.
Ochoa, X., Worsley, M., Chiluiza, K., & Luz, S. (2014). MLA'14: Third Multimodal Learning Analytics Workshop and Grand Challenges. Proceedings of the 16th International Conference on Multimodal Interaction, 531–532. https://doi.org/10.1145/2663204.2668318
Marcelo Worsley
Assistant Professor, Electrical Engineering and Computer Science & Learning Sciences
marcelo.worsley@northwestern.edu
marceloworsley.com
tiilt.northwestern.edu