Knowledge tracing (KT) models are a popular approach for predicting students' future performance on practice problems based on their prior attempts. Although many innovations have been made in KT, most models, including the state-of-the-art Deep Knowledge Tracing (DKT) model, represent each student's response only as correct or incorrect, ignoring its content. In this work, we propose Code-based Deep Knowledge Tracing (Code-DKT), a model that uses an attention mechanism to automatically extract and select domain-specific code features to extend DKT. We compared the effectiveness of Code-DKT against Bayesian and Deep Knowledge Tracing (BKT and DKT) on a dataset from a class of 50 students attempting to solve 5 introductory programming assignments. Our results show that Code-DKT consistently outperforms DKT by 3.07-4.00% AUC across the 5 assignments, an improvement comparable to that of other state-of-the-art domain-general KT models over DKT. Finally, we analyze problem-specific performance through a set of case studies on one assignment to demonstrate when and how code features improve Code-DKT's predictions.
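To make the architecture described above concrete, the following is a minimal, hypothetical sketch of the Code-DKT idea: student code submissions are embedded, an attention layer selects domain-specific code features, and the attended code representation is concatenated with the standard DKT correctness input before an LSTM predicts per-problem success probabilities. All module names, dimensions, and the tokenization scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of Code-DKT: attention over code-token embeddings,
# concatenated with the usual DKT (problem, correctness) one-hot input.
import torch
import torch.nn as nn


class CodeDKTSketch(nn.Module):
    def __init__(self, n_problems, vocab_size, code_dim=64, hidden_dim=128):
        super().__init__()
        # Token embedding for the student's code submission.
        self.token_emb = nn.Embedding(vocab_size, code_dim, padding_idx=0)
        # Scalar attention score per code token (feature selection).
        self.attn = nn.Linear(code_dim, 1)
        # Standard DKT input: one-hot encoding of (problem, correctness).
        self.dkt_input_dim = 2 * n_problems
        self.lstm = nn.LSTM(self.dkt_input_dim + code_dim, hidden_dim,
                            batch_first=True)
        # Predict the probability of answering each problem correctly next.
        self.out = nn.Linear(hidden_dim, n_problems)

    def forward(self, qa_onehot, code_tokens):
        # qa_onehot:    (batch, seq_len, 2 * n_problems)
        # code_tokens:  (batch, seq_len, max_code_len) integer token ids
        emb = self.token_emb(code_tokens)                 # (B, T, L, code_dim)
        scores = self.attn(emb).squeeze(-1)               # (B, T, L)
        weights = torch.softmax(scores, dim=-1)           # attention over tokens
        code_vec = (weights.unsqueeze(-1) * emb).sum(2)   # (B, T, code_dim)
        x = torch.cat([qa_onehot, code_vec], dim=-1)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h))                 # (B, T, n_problems)


# Toy usage: 5 problems and a vocabulary of 200 code tokens.
model = CodeDKTSketch(n_problems=5, vocab_size=200)
qa = torch.zeros(2, 7, 10)                   # 2 students, 7 attempts each
codes = torch.randint(0, 200, (2, 7, 30))    # padded code-token sequences
preds = model(qa, codes)
print(preds.shape)                           # torch.Size([2, 7, 5])
```

In this sketch the attention weights play the role of the automatic code-feature selection described in the abstract; the actual Code-DKT model may use a different code representation and attention formulation.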