Standard accuracy metrics have shown that Math Word Problem (MWP) solvers achieve high performance on benchmark datasets. However, the extent to which existing MWP solvers truly understand language and its relation to numbers remains unclear. In this paper, we generate adversarial attacks to evaluate the robustness of state-of-the-art MWP solvers. We propose two methods, Question Reordering and Sentence Paraphrasing, to generate adversarial attacks. We conduct experiments with three neural MWP solvers on two benchmark datasets. On average, our attack method reduces the accuracy of MWP solvers by over 40 percentage points on these datasets. Our results demonstrate that existing MWP solvers are sensitive to linguistic variations in the problem text. We verify the validity and quality of the generated adversarial examples through human evaluation.
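To make the Question Reordering perturbation concrete, here is a minimal sketch in Python. It assumes the question is the final sentence of the problem and simply moves it to the front while leaving all numbers untouched; the function name and the example problem are illustrative, not taken from the paper's implementation.

```python
import re

def question_reordering(problem: str) -> str:
    """Move the final question sentence to the front of the problem text.

    Assumes the question is the last sentence and that sentences end
    with '.', '?', or '!'; numeric values are left unchanged.
    """
    # Split into sentences, keeping the trailing punctuation with each one.
    sentences = re.findall(r"[^.?!]+[.?!]", problem)
    if len(sentences) < 2:
        return problem  # nothing to reorder
    *body, question = sentences
    return " ".join([question.strip()] + [s.strip() for s in body])

# Example usage (hypothetical problem text):
mwp = ("John had 5 apples. He bought 3 more apples. "
       "How many apples does John have now?")
print(question_reordering(mwp))
# -> "How many apples does John have now? John had 5 apples. He bought 3 more apples."
```

Sentence Paraphrasing would analogously rewrite individual non-question sentences (e.g., with a paraphrasing model) while preserving the quantities, so that the underlying equation and answer remain unchanged.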