We propose SAINT+, a successor to SAINT, a Transformer-based knowledge tracing model that separately processes exercise information and student response information. Following the architecture of SAINT, SAINT+ has an encoder-decoder structure: the encoder applies self-attention layers to a stream of exercise embeddings, and the decoder alternately applies self-attention layers and encoder-decoder attention layers to the stream of response embeddings and the encoder output. Moreover, SAINT+ incorporates two temporal feature embeddings into the response embeddings: elapsed time, the time a student takes to answer an exercise, and lag time, the time interval between adjacent learning activities. We empirically evaluate the effectiveness of SAINT+ on EdNet, the largest publicly available benchmark dataset in the education domain. Experimental results show that SAINT+ achieves state-of-the-art knowledge tracing performance, improving the area under the receiver operating characteristic curve (AUC) by 1.25% over SAINT, the previous state-of-the-art model on the EdNet dataset.
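To make the described architecture concrete, the following PyTorch code is a minimal sketch of a SAINT+-style model: an encoder over exercise embeddings and a decoder over response embeddings enriched with elapsed-time and lag-time features. The class name SAINTPlusSketch, all layer sizes, and the choice of encoding elapsed time as a continuous value and lag time as discretized buckets are assumptions made for illustration; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class SAINTPlusSketch(nn.Module):
    """Rough sketch of a SAINT+-style encoder-decoder (hypothetical sizes and encodings)."""

    def __init__(self, num_exercises, num_responses=2, d_model=128, n_heads=8,
                 n_layers=2, max_seq_len=100, num_lag_buckets=300):
        super().__init__()
        # Encoder stream: exercise embeddings plus learned positions.
        self.exercise_emb = nn.Embedding(num_exercises, d_model)
        self.pos_emb = nn.Embedding(max_seq_len, d_model)
        # Decoder stream: response embeddings plus the two temporal features
        # named in the abstract (elapsed time and lag time). Treating elapsed
        # time as continuous and lag time as bucketed is an assumption of
        # this sketch, not a detail taken from the paper.
        self.response_emb = nn.Embedding(num_responses, d_model)
        self.elapsed_time_proj = nn.Linear(1, d_model)
        self.lag_time_emb = nn.Embedding(num_lag_buckets, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=n_heads,
            num_encoder_layers=n_layers, num_decoder_layers=n_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, 1)  # per-step probability of a correct answer

    def forward(self, exercises, responses, elapsed_time, lag_buckets):
        # exercises, responses, lag_buckets: (batch, seq) integer tensors
        # elapsed_time: (batch, seq, 1) float tensor, e.g. seconds scaled to [0, 1]
        seq_len = exercises.size(1)
        pos = torch.arange(seq_len, device=exercises.device).unsqueeze(0)
        enc_in = self.exercise_emb(exercises) + self.pos_emb(pos)
        dec_in = (self.response_emb(responses)
                  + self.elapsed_time_proj(elapsed_time)
                  + self.lag_time_emb(lag_buckets)
                  + self.pos_emb(pos))
        # Causal mask: position i may only attend to positions <= i. A full
        # implementation would also shift the response stream by one step so
        # the model never sees the response it is asked to predict.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf"),
                                     device=exercises.device), diagonal=1)
        h = self.transformer(enc_in, dec_in,
                             src_mask=mask, tgt_mask=mask, memory_mask=mask)
        return torch.sigmoid(self.out(h)).squeeze(-1)  # (batch, seq)

# Example usage with random inputs (shapes only; not real EdNet data):
model = SAINTPlusSketch(num_exercises=10000)
probs = model(torch.randint(0, 10000, (4, 20)),   # exercise ids
              torch.randint(0, 2, (4, 20)),        # past responses (0/1)
              torch.rand(4, 20, 1),                # elapsed time
              torch.randint(0, 300, (4, 20)))      # lag-time buckets
```

The sketch mirrors the separation described in the abstract: exercise information enters only through the encoder, while response information, together with the elapsed-time and lag-time embeddings, enters only through the decoder.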