We present Twin Answer Sentences Attack (TASA), an adversarial attack method for question answering (QA) models that produces fluent and grammatical adversarial contexts while preserving the gold answers. Despite phenomenal progress on general adversarial attacks, few works have investigated the vulnerability of, and attacks on, QA models specifically. In this work, we first explore the biases in existing models and discover that they mainly rely on keyword matching between the question and context, and ignore the relevant contextual relations for answer prediction. Based on these two biases, TASA attacks the target model in two ways: (1) lowering the model's confidence in the gold answer with a perturbed answer sentence; and (2) misguiding the model towards a wrong answer with a distracting answer sentence. Equipped with designed beam search and filtering methods, TASA generates more effective attacks than existing textual attack methods while sustaining the quality of the contexts, as shown in extensive experiments on five QA datasets and in human evaluations.
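As a rough illustration of the first objective, the minimal sketch below ranks candidate adversarial contexts by how much they reduce an extractive QA model's confidence in the gold answer; it assumes a HuggingFace `transformers` QA pipeline, and the model checkpoint, candidate contexts, and helpers `gold_confidence` and `rank_candidates` are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only: rank candidate adversarial contexts by how much
# they lower a QA model's confidence in the gold answer. Helper names and the
# model checkpoint are assumptions for illustration, not the paper's code.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def gold_confidence(question: str, context: str, gold_answer: str) -> float:
    """Model confidence in the gold answer; 0.0 once the prediction drifts."""
    pred = qa(question=question, context=context)
    return pred["score"] if gold_answer.lower() in pred["answer"].lower() else 0.0

def rank_candidates(question: str, gold_answer: str, candidates: list[str],
                    beam_size: int = 3) -> list[str]:
    """One beam-style step: keep the contexts that hurt the gold answer most."""
    return sorted(candidates,
                  key=lambda c: gold_confidence(question, c, gold_answer))[:beam_size]

question = "Who wrote Hamlet?"
gold = "William Shakespeare"
candidates = [
    # A perturbed answer sentence (paraphrased to weaken keyword overlap).
    "Hamlet is a tragedy penned around 1600 by William Shakespeare.",
    # The original sentence plus a distracting answer sentence.
    "Hamlet is a tragedy written by William Shakespeare around 1600. "
    "The tragedy Macbeth was written by Christopher Marlowe.",
]
print(rank_candidates(question, gold, candidates))
```

The full method additionally constrains the distracting sentence so it cannot overwrite the gold answer, and applies fluency and grammaticality filtering that this sketch omits.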