Task requirement (TR) writing is an important question type in the Key English Test and the Preliminary English Test. A TR writing question may include multiple requirements, and a high-quality essay must respond to each requirement thoroughly and accurately. However, limited teacher resources prevent students from receiving detailed grading instantly. Most existing automatic essay scoring systems focus on producing a holistic score but rarely provide reasons to support it. In this paper, we propose an end-to-end framework based on machine reading comprehension (MRC) to address this problem to some extent. The framework not only detects whether an essay responds to a requirement question but also clearly marks where in the essay the question is answered. Our framework consists of three modules: a question normalization module, an ELECTRA-based MRC module, and a response locating module. We extensively explore state-of-the-art MRC methods. Our approach achieves a 0.93 accuracy score and a 0.85 F1 score on a real-world educational dataset. To encourage reproducible results, we make our code publicly available at \url{https://github.com/aied2021TRMRC/AIED_2021_TRMRC_code}.
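To illustrate the detection-plus-locating idea, the following is a minimal sketch of a response locating step, assuming an extractive MRC setup: the ELECTRA-based module is presumed to emit per-token start and end scores for a requirement question, and a span search then either marks the answering span in the essay or reports that the requirement is unaddressed. The function name, the score format, and the no-answer threshold are all illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical response-locating step for an extractive MRC setup.
# Inputs: essay tokens plus per-token start/end scores (assumed to come
# from an ELECTRA-based MRC module). Output: whether the essay responds
# to the requirement, and if so, the highest-scoring answering span.
def locate_response(tokens, start_scores, end_scores,
                    max_span_len=30, null_threshold=0.0):
    best_score, best_span = null_threshold, None
    for i, s in enumerate(start_scores):
        # Only consider spans that start at i and are not overly long.
        for j in range(i, min(i + max_span_len, len(tokens))):
            score = s + end_scores[j]
            if score > best_score:
                best_score, best_span = score, (i, j)
    if best_span is None:
        # No span beats the null threshold: treat the requirement
        # as unaddressed by the essay.
        return False, ""
    i, j = best_span
    return True, " ".join(tokens[i:j + 1])

tokens = "I will bring a camera to the party".split()
start = [0, 0, 0, 2.0, 0, 0, 0, 0]
end = [0, 0, 0, 0, 1.5, 0, 0, 0]
print(locate_response(tokens, start, end))        # span found
print(locate_response(tokens, [0] * 8, [0] * 8))  # no response detected
```

The null threshold plays the role of a "does the essay respond at all?" decision, analogous to no-answer handling in extractive QA; in practice it would be tuned on held-out data.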