Automatic grading models are valued for the time and effort they save when instructing large student bodies. Especially with the increasing digitization of education and the growing interest in large-scale standardized testing, automatic grading has become popular enough that commercial solutions are widely available and used. However, for short answer formats, automatic grading remains challenging due to the ambiguity and variability of natural language. While automatic short answer grading models are beginning to approach human performance on some datasets, their robustness, especially to adversarially manipulated data, is questionable. Exploitable vulnerabilities in grading models can have far-reaching consequences, ranging from cheating students receiving undeserved credit to undermining trust in automatic grading altogether, even when most predictions are valid. In this paper, we devise a black-box adversarial attack tailored to the educational short answer grading scenario to investigate the robustness of grading models. In our attack, we insert adjectives and adverbs at natural positions in incorrect student answers, fooling the model into predicting them as correct. Using the state-of-the-art models BERT and T5, we observed drops in prediction accuracy of 10 to 22 percentage points. While our attack made answers appear less natural to humans in our experiments, it did not significantly increase the graders' suspicions of cheating. Based on our experiments, we provide recommendations for utilizing automatic grading systems more safely in practice.
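To make the attack idea concrete, the following is a minimal Python sketch of a greedy black-box insertion attack of the kind described above. All names (`grade`, `insertion_attack`, the modifier list, the query budget) are illustrative assumptions rather than the paper's implementation; in particular, the sketch tries every token gap, whereas the paper restricts insertions to grammatically natural positions (e.g., an adjective before a noun), which would require part-of-speech tagging.

```python
# Hypothetical sketch of a black-box adjective/adverb insertion attack.
# `grade` stands in for any grading model that returns 1 = "correct", 0 = "incorrect".
import itertools
from typing import Callable, List

# Illustrative modifier pool; not the word list used in the paper.
MODIFIERS: List[str] = ["really", "basically", "certainly", "actual", "simple"]

def insertion_attack(answer: str,
                     grade: Callable[[str], int],
                     query_budget: int = 100) -> str:
    """Greedily insert one modifier at a time until the black-box grader
    flips its prediction from incorrect (0) to correct (1) or the budget runs out."""
    tokens = answer.split()
    queries = 0
    # Try every (position, modifier) pair; keep the first insertion that fools the grader.
    for pos, word in itertools.product(range(len(tokens) + 1), MODIFIERS):
        if queries >= query_budget:
            break
        candidate = " ".join(tokens[:pos] + [word] + tokens[pos:])
        queries += 1
        if grade(candidate) == 1:
            return candidate  # adversarial answer found
    return answer  # attack failed within the query budget

if __name__ == "__main__":
    # Toy stand-in for the grading model, purely for demonstration.
    toy_grader = lambda a: int("certainly" in a)
    print(insertion_attack("the heap is a binary tree", toy_grader))
```

In this sketch the attacker only needs label feedback from the grader, matching the black-box setting: no gradients or model internals of BERT or T5 are accessed, only repeated queries with slightly modified answers.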