Recent approaches have exploited weaknesses in monolingual question answering (QA) models by adding adversarial statements to the passage. These attacks reduced state-of-the-art performance by almost 50%. In this paper, we are the first to explore and successfully attack a multilingual QA (MLQA) system pre-trained on multilingual BERT, using several strategies for constructing the adversarial statement and reducing performance by as much as 85%. We show that the model gives priority to English and to the language of the question, regardless of the other languages in the QA pair. Further, we show that incorporating our attack strategies during training helps alleviate the attacks.