This paper surveys and organizes research in an under-studied area that we call automated evaluation for student argumentative writing. Unlike traditional automated writing evaluation, which focuses on holistic essay scoring, this field is more specific: it evaluates argumentative essays and offers targeted feedback, such as argumentation structures and argument strength trait scores. This focused, detailed evaluation helps students acquire essential argumentation skills. In this paper we organize existing work around tasks, data, and methods. We further experiment with BERT on representative datasets, aiming to provide up-to-date baselines for this field.