Knowledge tracing allows Intelligent Tutoring Systems to infer which topics or skills a student has mastered and to adjust the curriculum accordingly. Deep learning-based models such as Deep Knowledge Tracing (DKT) and the Dynamic Key-Value Memory Network (DKVMN) have achieved significant improvements over models such as Bayesian Knowledge Tracing (BKT) and Performance Factors Analysis (PFA). However, these deep learning-based models are less interpretable than their predecessors because the decision-making process learned by deep neural networks is not fully understood by the research community. In previous work, we critically examined the DKT model, visualizing and analyzing its behavior in high-dimensional space. In this work, we extend our original analyses with a much larger dataset and add a discussion of the memory states of the DKVMN model. We find that Deep Knowledge Tracing has several critical pitfalls: 1) instead of tracking each skill through time, DKT tends to learn an `ability' model; 2) the recurrent nature of DKT reinforces irrelevant information that it uses during the tracking task; 3) an untrained recurrent network can achieve results similar to a trained DKT model, suggesting that recurrence relations are not properly learned and that the improvements come simply from projection into a high-dimensional, sparse vector space. Based on these observations, we propose improvements and future directions for conducting knowledge tracing research using deep neural network models.
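To make the third finding concrete, the sketch below contrasts a standard DKT-style LSTM with a variant whose recurrent weights are frozen at random initialization so that only the readout layer is trained (a random-projection baseline). This is a minimal illustration under assumed dimensions and a placeholder skill count, not the authors' exact experimental configuration.

```python
# Minimal sketch of the "untrained recurrent network" comparison (assumed setup,
# not the authors' exact configuration). Requires PyTorch.
import torch
import torch.nn as nn

class DKTModel(nn.Module):
    def __init__(self, num_skills, hidden_size=200, freeze_recurrent=False):
        super().__init__()
        # Input at each step: one-hot encoding of (skill, correctness) pairs.
        self.lstm = nn.LSTM(2 * num_skills, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, num_skills)  # per-skill prediction
        if freeze_recurrent:
            # Keep the recurrent weights at random initialization; only the
            # readout layer is trained, so the LSTM acts as a fixed projection
            # into a high-dimensional space.
            for p in self.lstm.parameters():
                p.requires_grad = False

    def forward(self, x):
        h, _ = self.lstm(x)                 # (batch, time, hidden)
        return torch.sigmoid(self.out(h))   # predicted P(correct) per skill

# Usage: train both variants with the same loop and compare held-out AUC.
# num_skills=110 is a placeholder (e.g., an ASSISTments-style skill count).
trained = DKTModel(num_skills=110)
untrained_rnn = DKTModel(num_skills=110, freeze_recurrent=True)
```

If the frozen-recurrence variant reaches comparable predictive performance, that supports the conclusion stated above: the gains are attributable to the high-dimensional, sparse projection rather than to learned recurrence relations.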