

  1. ITS 2018. SARLR: Self-adaptive Recommendation of Learning Resources. Authors: Liping Liu, Wenjun Wu and Jiankun Huang. Institution: State Key Lab of Software Development Environment, Department of Computer Science, Beihang University

  2. Contents: 01 Introduction   02 Self-Adaptive Recommendation   03 Experiments   04 Conclusions

  3. 01 Introduction

  4. Introduction

  5. 1. Introduction
     Rule-based Recommendation:
      Requires domain experts to evaluate learning scenarios and learning objects
      Defines extensive recommendation rules
      Can only be applied in specific learning domains
     Data-driven Recommendation:
      Compares similarity among students
      Is more scalable and general
      Fails to consider the impact of the difficulty of learning objects and its dynamic change

  6. 1. Introduction   Contributions
      SARLR, a novel learning-resource recommendation algorithm
      T-BMIRT, a temporal multidimensional IRT-based model that incorporates parameters of video learning
      An evaluation strategy for recommendation algorithms in terms of rationality and effectiveness

  7. 02 Self-Adaptive Recommendation

  8. 2. Self-Adaptive Recommendation   The overall architecture of the SARLR algorithm

  9. 2. Self-Adaptive Recommendation   IRT / T-IRT
      IRT (two-parameter model): the probability that student $s$ answers question $q$ correctly is
       $p_{sq} = \frac{1}{1 + \exp[-(\beta_q \theta_s - \gamma_q)]}$
       where $\beta_q$ is the question discrimination, $\gamma_q$ the question difficulty, and $\theta_s$ the student's ability.
      T-IRT: the Temporal IRT extends IRT by modeling the student's knowledge state over time as a Wiener process:
       $P(\theta_{t+\tau} \mid \theta_t) = \mathcal{N}(\theta_t, \sigma^2 \tau)$, i.e. $\theta_{t+\tau} - \theta_t \sim \mathcal{N}(0, \sigma^2 \tau)$
     [Figure: Item Characteristic Curve (ICC), probability of a correct response vs. student ability]
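As a concrete reading of the two formulas above, the Python sketch below computes the 2PL response probability and one Wiener-process step of the ability. It is illustrative only; the function and variable names (irt_prob, tirt_transition, beta, gamma, sigma) are ours, not from the authors' code.

```python
import numpy as np

def irt_prob(theta, beta, gamma):
    """2PL IRT: probability of a correct response, given student ability theta,
    question discrimination beta and question difficulty gamma."""
    return 1.0 / (1.0 + np.exp(-(beta * theta - gamma)))

def tirt_transition(theta_t, sigma, tau, rng=None):
    """T-IRT: ability drifts as a Wiener process, theta_{t+tau} ~ N(theta_t, sigma^2 * tau)."""
    rng = rng or np.random.default_rng()
    return theta_t + rng.normal(0.0, sigma * np.sqrt(tau))

# Example: ability 0.8, item discrimination 1.2, difficulty 0.5 -> p is roughly 0.61
p = irt_prob(0.8, 1.2, 0.5)
theta_next = tirt_transition(0.8, sigma=0.5, tau=1.0)
```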

  10. 2. Self-Adaptive Recommendation   T-BMIRT
     T-BMIRT adds the knowledge gained from watching a video to the temporal ability transition:
       $P(\theta_{s,t+\tau} \mid \theta_{s,t}, v_{s,t}) = \mathcal{N}(\theta_{s,t} + v_{s,t}, \sigma^2 \tau)$
       $v_{s,t} = \frac{l_{s,w}}{l_w} \cdot g_w \cdot \frac{1}{1 + \exp\left(-\frac{\theta_{s,t} \cdot r_w - \lVert r_w \rVert}{\lVert r_w \rVert}\right)}$
     where $v_{s,t}$ is the knowledge that student $s$ gains from the video $w$; $g_w$ is the knowledge of the video $w$; $r_w$ is the prerequisites of the video $w$; $l_{s,w}$ is the duration for which student $s$ watches video $w$; and $l_w$ is the total length of the video $w$.
     We use the vector projection method to measure how far the student's ability exceeds the video's requirements.
     [Figure: projection of the student's ability onto the video's prerequisite vector in the Skill 1 / Skill 2 plane]
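The sketch below shows one possible reading of this gain term: the student's ability vector is projected onto the prerequisite vector, the excess over the requirement is squashed by a sigmoid, and the watch ratio scales the video's knowledge. It is a hedged reconstruction of the garbled slide, not the authors' code; all names (video_gain, tbmirt_transition, g_w, r_w, watched, length) are our own.

```python
import numpy as np

def video_gain(theta, g_w, r_w, watched, length):
    """Knowledge a student gains from a video (illustrative T-BMIRT reading):
    readiness = sigmoid of how far the projection of ability theta onto the
    prerequisite vector r_w exceeds the prerequisite level; the watch ratio
    and the video's knowledge vector g_w scale the gain."""
    r_norm = np.linalg.norm(r_w)
    excess = (np.dot(theta, r_w) - r_norm) / r_norm
    readiness = 1.0 / (1.0 + np.exp(-excess))
    return (watched / length) * g_w * readiness

def tbmirt_transition(theta, gain, sigma, tau, rng=None):
    """Ability drifts by the video gain plus Wiener-process noise."""
    rng = rng or np.random.default_rng()
    return theta + gain + rng.normal(0.0, sigma * np.sqrt(tau), size=theta.shape)

theta = np.array([0.6, 0.9])   # two-skill ability vector
g_w = np.array([0.3, 0.1])     # knowledge carried by the video
r_w = np.array([0.5, 0.5])     # prerequisite vector of the video
gain = video_gain(theta, g_w, r_w, watched=8.0, length=10.0)
theta_next = tbmirt_transition(theta, gain, sigma=0.15, tau=1.0)
```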

  11. 2. Self-Adaptive Recommendation   Search and Extraction (SARLR Phase 1)
     INPUT:
     • Set of students $S = \{s_1, s_2, \dots, s_n\}$, target student $s_x \in S$
     • Matrix of abilities $A = [\theta_{s,t}]$, where $\theta_{s,t}$ is the ability value of student $s$ at time $t$
     • Set of learning resources $E = \{e_1, e_2, \dots, e_m\}$
     OUTPUT: learning path $p$
     1: search for similar students $MS$, where $s_k \in MS$ and $\theta_{s_k, t_0}$ is similar to $\theta_{s_x, t_0}$
     2: for each $s_i \in MS$ do
     3:    find $s_b = \arg\max(\mathrm{distance}(\theta_{s_i, T_{s_i}} - \theta_{s_i, t_0}))$, where $T_{s_i}$ is the time at which $s_i$ completed learning
     4: end for
     5: extract the learning path $p = (e_{i_1}, e_{i_2}, \dots, e_{i_T})$ of $s_b$
     6: return $p$
     [Figure: a learning path alternating Video 1, Assessment 1, Video 2, ..., Video n, Assessment n; and the ability trajectories of similar students in the Skill 1 / Skill 2 plane]
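A minimal Python sketch of Phase 1 follows, assuming ability vectors are stored per (student, time) and each finished student has a recorded completion time and learning path; the data layout and names (phase1_search_and_extract, histories) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def phase1_search_and_extract(theta, t0, target, histories, k=5):
    """SARLR Phase 1 (sketch): find students whose initial ability is close to
    the target's, pick the one with the largest ability growth over their own
    learning history, and reuse that student's learning path.
    theta     : dict (student, time) -> ability vector
    t0        : current time step of the target student
    histories : dict student -> (completion_time, learning_path)"""
    theta_x = theta[(target, t0)]
    # line 1: similar students = k nearest initial abilities
    others = sorted((s for s in histories if s != target),
                    key=lambda s: np.linalg.norm(theta[(s, t0)] - theta_x))
    similar = others[:k]
    # lines 2-4: among them, pick the student with the largest ability gain
    def gain(s):
        t_end, _ = histories[s]
        return np.linalg.norm(theta[(s, t_end)] - theta[(s, t0)])
    best = max(similar, key=gain)
    # lines 5-6: extract and return that student's learning path
    return histories[best][1]

# toy usage (all data hypothetical)
theta = {("alice", 0): np.array([1.0, 2.0]), ("alice", 9): np.array([5.0, 6.0]),
         ("bob", 0): np.array([1.1, 2.1]), ("bob", 7): np.array([6.0, 7.0]),
         ("carol", 0): np.array([4.0, 4.0]), ("carol", 8): np.array([5.0, 5.0]),
         ("dave", 0): np.array([1.0, 1.9])}
histories = {"alice": (9, ["v1", "q1", "v2"]), "bob": (7, ["v3", "q2"]),
             "carol": (8, ["v4", "q3"])}
path = phase1_search_and_extract(theta, 0, "dave", histories, k=2)  # -> bob's path
```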

  12. 2. Self-Adaptive Recommendation   Adaptive Adjustment (SARLR Phase 2: Adaptive Re-planning)
     $p_{sq} = \frac{1}{1 + \exp(-(\theta_{s,i} \cdot \beta_q - \gamma_q))}$ : the probability of student $s$ correctly answering exercise $q$
     $p_{se} = \frac{1}{1 + \exp\left(-\frac{\theta_{s,i} \cdot r_e - \lVert r_e \rVert}{\lVert r_e \rVert}\right)}$ : the degree of knowledge that student $s$ can acquire from the video $e$
     INPUT:
     • Target student $s_x$, recommended learning path $p = (e_{i_1}, e_{i_2}, \dots, e_{i_T})$
     • Results of $s_x$ interacting with the learning resources in $p$
     OUTPUT: new learning path
     1: for each $e \in p$ do
     2:    if $e$ is a video and $p_{se} < D_{se}$ do
     3:       return to SARLR Phase 1 to re-plan path $p$
     4:    else if $e$ is an exercise, $s_x$ failed it, and $p_{sq} < D_{sq}$ do
     5:       return to SARLR Phase 1 to re-plan path $p$
     6:    end if
     7: end for
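The re-planning rule reduces to a simple scan over the recommended path. The sketch below captures it under assumed data structures (resource kinds, per-resource predictions and thresholds are all hypothetical names); real code would recompute p_se and p_sq from the current ability estimate.

```python
def phase2_adaptive_replan(path, interactions, p_video, p_exercise,
                           d_video, d_exercise):
    """SARLR Phase 2 (sketch): walk the recommended path and trigger re-planning
    when the model predicts the student is not ready for a resource.
    path         : list of (resource_id, kind), kind in {"video", "exercise"}
    interactions : dict resource_id -> True/False for exercises (passed or not)
    p_video      : dict resource_id -> predicted knowledge gain p_se
    p_exercise   : dict resource_id -> predicted correctness p_sq
    Returns True if Phase 1 should be rerun to re-plan the path."""
    for resource, kind in path:
        if kind == "video" and p_video[resource] < d_video:
            return True          # expected gain from the video is too low: re-plan
        if (kind == "exercise" and interactions.get(resource) is False
                and p_exercise[resource] < d_exercise):
            return True          # failed and predicted to keep failing: re-plan
    return False                 # keep following the current path

# toy usage: exercise q1 was failed and its predicted correctness is below threshold
path = [("v1", "video"), ("q1", "exercise")]
should_replan = phase2_adaptive_replan(path, {"q1": False}, {"v1": 0.8}, {"q1": 0.3},
                                       d_video=0.5, d_exercise=0.5)  # -> True
```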

  13. 03 Experiments

  14. 3. Experiments   Datasets
      A publicly accessible data set
     • ASSISTments Math 2004-2005
     • From the ASSISTments online platform
     • Contains 224,076 interactions, 860 students, 1,427 assessments and 106 skills
      A proprietary data set
     • Blended learning data
     • From our blended learning analytics platform
     • Contains 14,037,146 learning behavior records from 140 schools and 9 online educational companies

  15. 3. Experiments   Experiments for T-BMIRT
     Baselines:
     • Frequency method: predict a correct answer when the student's historical correct rate is greater than 50%.
     • IRT: two-parameter ogive model.
     • MIRT: multidimensional item response theory model.
     • T-IRT: temporal IRT with $\sigma = 0.5$, selected in exploratory experiments.
     • T-BMIRT: temporal blended multidimensional IRT with $\sigma = 0.15$ and $\alpha = 10^{-4}$.
     Results (ACC / AUC):
                          | Assistments                     | Blended learning data
     Model                | One-dim.       | Multi-dim.     | One-dim.       | Multi-dim.
                          | ACC     AUC    | ACC     AUC    | ACC     AUC    | ACC     AUC
     Frequency method     | 0.694   N/A    | 0.683   N/A    | 0.702   N/A    | 0.688   N/A
     IRT                  | 0.716   0.779  | 0.701   0.758  | 0.721   0.784  | 0.706   0.752
     MIRT                 | 0.714   0.771  | 0.721   0.786  | 0.718   0.775  | 0.722   0.783
     T-IRT                | 0.738   0.805  | 0.712   0.769  | 0.744   0.801  | 0.717   0.764
     T-BMIRT              | 0.743   0.815  | 0.738   0.803  | 0.757   0.820  | 0.748   0.816

  16. 3. Experiments   Rationality Evaluation
     $RC(s_x) = \frac{1}{m} \sum_{e_i \in p} \mathrm{similarity}(r_{e_i}, KC_{s_x})$
     $DC(s_x) = \frac{1}{m} \sum_{e_i \in p} \mathrm{similarity}(r_{e_i}, \theta_{s_x, i})$
     • $e_i \in p$: the learning resources in a recommended path; $m$ is the length of the path $p$
     • $KC_{s_x}$: the knowledge components which $s_x$ is learning in the current chapter
     • similarity(): the adjusted cosine similarity of the two vectors in the parentheses
     Model   Relevance accuracy   Difficulty accuracy
     UCF     0.86                 0.77
     ICF     0.71                 0.83
     LFM     0.87                 0.84
     SARLR   0.97                 0.92
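For concreteness, the sketch below computes RC and DC from per-resource knowledge-component vectors, using one common reading of "adjusted cosine similarity" (mean-centering both vectors before taking the cosine); the function names and the centering choice are our assumptions, not taken from the paper.

```python
import numpy as np

def adjusted_cosine(a, b):
    """Cosine similarity of the two vectors after subtracting each vector's mean
    (one common form of the adjusted cosine similarity)."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def relevance_accuracy(path_kcs, target_kcs):
    """RC(s_x): mean similarity between each recommended resource's knowledge
    components and the knowledge components the student is currently learning."""
    return float(np.mean([adjusted_cosine(kc, target_kcs) for kc in path_kcs]))

def difficulty_accuracy(path_kcs, abilities):
    """DC(s_x): mean similarity between each recommended resource's knowledge
    components and the student's ability vector when it was recommended."""
    return float(np.mean([adjusted_cosine(kc, th) for kc, th in zip(path_kcs, abilities)]))
```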

  17. 3. Experiments   Effectiveness Evaluation
     $G = \frac{FS_{S'} - FS_S}{FS_S}$
     • $S'$: the students whose learning paths are strictly recommended
     • $S$: the students whose learning paths are randomly selected
     • $FS_{S'}$ and $FS_S$: the two groups' average scores in the last online assessment
     Expected gain:
     Model   1       2       3       4       5       6
     UCF     -0.04   -0.06   0.07    -0.03   0.08    0.01
     ICF     0.05    0.04    -0.03   0.07    -0.02   0.05
     LFM     0.04    0.12    0.09    0.10    0.03    -0.05
     SARLR   0.11    0.27    0.24    0.23    0.17    0.06
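Since the expected gain is a relative improvement, a worked example helps: if students who strictly followed the recommended paths averaged 81 on the last assessment and the random-path group averaged 72 (hypothetical numbers), then G = (81 - 72) / 72, roughly 0.125. A one-line helper (names are ours) makes the definition explicit:

```python
def expected_gain(avg_score_recommended, avg_score_random):
    """G = (FS_{S'} - FS_S) / FS_S: relative improvement of the strictly
    recommended group over the randomly assigned group."""
    return (avg_score_recommended - avg_score_random) / avg_score_random

g = expected_gain(81.0, 72.0)   # -> 0.125
```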

  18. 04 Conclusions

  19. 4. Conclusions
      T-BMIRT: performs well on the prediction task of multi-dimensional skills assessments
      Adaptivity: establishes conditions to adaptively adjust recommendations towards the dynamic needs of the students
      Evaluation criteria: an evaluation strategy for personalized learning recommendation in terms of rationality and effectiveness

  20. THANKS
