In this work, we study computational approaches to detecting online dialogic instructions, which are widely used to help students understand learning materials and build effective study habits. The task is challenging because dialogic instructions vary widely in quality and pedagogical style. To address these challenges, we build on pre-trained language models and propose a multi-task paradigm that enlarges the margin between categories via a contrastive loss, strengthening the model's ability to distinguish instances of different classes. Furthermore, we design a strategy that fully exploits misclassified examples during training. Extensive experiments on a real-world online education dataset demonstrate that our approach achieves superior performance compared to representative baselines. To encourage reproducible research, we make our implementation available at \url{https://github.com/AIED2021/multitask-dialogic-instruction}.
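To make the multi-task objective concrete, below is a minimal PyTorch sketch of one common way to pair a classification cross-entropy term with a pairwise contrastive margin term; the function names, the trade-off weight \texttt{alpha}, and the margin value are illustrative assumptions rather than the paper's exact formulation.

\begin{verbatim}
import torch
import torch.nn.functional as F

def contrastive_margin_loss(embeddings, labels, margin=1.0):
    # Pairwise contrastive loss: pull same-class embeddings together,
    # push different-class embeddings at least `margin` apart.
    dist = torch.cdist(embeddings, embeddings, p=2)        # pairwise L2 distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    pos = same * dist.pow(2)                               # attract same-class pairs
    neg = (1 - same) * F.relu(margin - dist).pow(2)        # repel cross-class pairs
    off_diag = 1 - torch.eye(len(labels), device=labels.device)
    return ((pos + neg) * off_diag).sum() / off_diag.sum()

def multitask_loss(logits, embeddings, labels, alpha=0.5, margin=1.0):
    # Joint objective: classification loss plus the margin-enlarging
    # contrastive term; `alpha` is a hypothetical trade-off coefficient.
    ce = F.cross_entropy(logits, labels)
    return ce + alpha * contrastive_margin_loss(embeddings, labels, margin)
\end{verbatim}

In this sketch, \texttt{logits} and \texttt{embeddings} would both come from the pre-trained language model over a mini-batch, so the two tasks share a single encoder and are optimized jointly.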