This work compares two approaches to providing metacognitive interventions and their impact on preparing students for future learning across Intelligent Tutoring Systems (ITSs). In two consecutive semesters, we conducted two classroom experiments. Exp. 1 used a classic artificial intelligence approach to classify students into distinct metacognitive groups and provided static interventions based on group membership. In Exp. 2, we leveraged Deep Reinforcement Learning (DRL) to provide adaptive interventions that account for dynamic changes in a student's metacognitive levels. In both experiments, the interventions taught students how and when to use a backward-chaining (BC) strategy on a logic tutor whose default is a forward-chaining strategy. Six weeks later, we trained the students on a probability tutor that supports only BC, with no further interventions. Our results show that the adaptive DRL-based interventions closed the metacognitive skills gap among students, whereas the static classifier-based interventions benefited only the subset of students who already knew how to use BC. Moreover, the DRL-based interventions prepared students for future learning: experimental students significantly outperformed their control peers on both ITSs.
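To make the contrast between the two approaches concrete, the sketch below illustrates, under stated assumptions, how a static classifier-based policy differs from an adaptive value-based policy. Everything here is hypothetical: the action set `INTERVENTIONS`, the group labels, and the class names are illustrative, and a linear Q-function stands in for the deep network the paper's DRL agent would use.

```python
# Hypothetical sketch, NOT the authors' implementation: contrasts a static,
# classifier-based intervention policy (Exp. 1) with an adaptive, value-based
# policy (Exp. 2). A linear Q-function substitutes for a deep network here.
import numpy as np

# Assumed action set; the actual interventions in the study may differ.
INTERVENTIONS = ["prompt_bc", "worked_example_bc", "no_intervention"]

def static_policy(metacognitive_group: str) -> str:
    """Exp. 1 style: one fixed intervention per classified metacognitive
    group, assigned once from pre-training data and never revisited."""
    table = {"knows_bc": "no_intervention", "lacks_bc": "worked_example_bc"}
    return table.get(metacognitive_group, "prompt_bc")

class LinearQPolicy:
    """Exp. 2 style: re-decides the intervention at every step as the
    student's (featurized) metacognitive state changes."""

    def __init__(self, n_features: int, lr: float = 0.01, gamma: float = 0.9):
        self.w = np.zeros((len(INTERVENTIONS), n_features))
        self.lr, self.gamma = lr, gamma

    def act(self, state: np.ndarray, eps: float = 0.1) -> int:
        if np.random.rand() < eps:                 # explore occasionally
            return np.random.randint(len(INTERVENTIONS))
        return int(np.argmax(self.w @ state))      # otherwise exploit Q-values

    def update(self, s: np.ndarray, a: int, r: float, s_next: np.ndarray):
        """One temporal-difference (Q-learning) step, so the policy keeps
        adapting to the student's evolving state during training."""
        target = r + self.gamma * float(np.max(self.w @ s_next))
        td_error = target - float(self.w[a] @ s)
        self.w[a] += self.lr * td_error * s
```

The design difference the sketch highlights is that `static_policy` consults the student's state exactly once, while `LinearQPolicy.act` is called on every decision point and `update` lets reward signals reshape the policy online.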