Knowledge tracing (KT) is the problem of predicting students' future performance based on their historical interactions with intelligent tutoring systems. Recent studies have applied multiple types of deep neural networks to solve the KT problem. However, there are two important factors in real-world educational data that are not well represented. First, most existing works augment input representations with the co-occurrence matrix of questions and knowledge components\footnote{\label{ft:kc}A KC is a generalization of everyday terms like concept, principle, fact, or skill.} (KCs) but fail to explicitly integrate such intrinsic relations into the final response prediction task. Second, the individualized historical performance of students has not been well captured. In this paper, we propose \emph{AT-DKT} to improve the prediction performance of the original deep knowledge tracing model with two auxiliary learning tasks, i.e., a \emph{question tagging (QT) prediction task} and an \emph{individualized prior knowledge (IK) prediction task}. Specifically, the QT task helps learn better question representations by predicting whether questions contain specific KCs. The IK task captures students' global historical performance by progressively predicting student-level prior knowledge that is hidden in students' historical learning interactions. We conduct comprehensive experiments on three real-world educational datasets and compare the proposed approach to both deep sequential KT models and non-sequential models. Experimental results show that \emph{AT-DKT} outperforms all sequential models, with AUC improvements of more than 0.9\% on all datasets, and is almost the second best compared to non-sequential models. Furthermore, we conduct both ablation studies and quantitative analysis to show the effectiveness of the auxiliary tasks and the superior prediction outcomes of \emph{AT-DKT}.