Designing dialog tutors is challenging because it involves modeling the diverse and complex pedagogical strategies employed by human tutors. Despite significant recent advances in neural conversational systems built on large language models, and the growth of available dialog corpora, dialog tutoring has largely remained unaffected by these advances. In this paper, we rigorously analyze various generative language models on two dialog tutoring datasets for language learning, using automatic and human evaluations, to understand both the new opportunities these advances bring and the challenges we must overcome to build models usable in real educational settings. We find that although current approaches can model tutoring in constrained learning scenarios, where the number of concepts to be taught and the set of possible teacher strategies are small, they perform poorly in less constrained scenarios. Our human quality evaluation shows that both models and ground-truth annotations score low on equitable tutoring, which measures the learning opportunities offered to students and how engaging the dialog is. To understand the behavior of our models in a real tutoring setting, we conduct a user study with expert annotators and find a significant number of model reasoning errors, occurring in 45% of conversations. Finally, we connect our findings to outline directions for future work.