Despite significant advances in natural language understanding brought about by pre-trained models such as BERT and XLNet, these neural-network-based classifiers remain vulnerable to black-box adversarial attacks, in which the attacker may only query the target model's outputs. We impose two additional, more realistic restrictions on attack methods: a limit on the number of queries allowed (query budget) and the requirement that attacks transfer easily across different pre-trained models (transferability). These restrictions render previous attack methods impractical and ineffective. Here, we propose a target-model-agnostic adversarial attack method that achieves a high degree of transferability across attacked models. Our empirical studies show that, compared to baseline methods, our method generates highly transferable adversarial sentences under limited query budgets.
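To make the query-budget constraint concrete, the following is a minimal, hypothetical sketch (not the paper's method) of a greedy word-substitution attack on a black-box text classifier that counts every query against a fixed budget. The function names, the `synonyms` table, and the toy model are all illustrative assumptions.

```python
# Hypothetical word-level substitution attack under a fixed query budget.
# `model` is any black-box classifier mapping a sentence (str) to a label;
# only its outputs are observable, matching the black-box setting above.

def query_budgeted_attack(model, sentence, synonyms, budget=100):
    """Greedily swap words for synonyms, counting every model query.

    Stops as soon as the prediction flips or the budget is exhausted.
    Returns (adversarial_sentence, queries_used, success).
    """
    words = sentence.split()
    original_label = model(sentence)
    queries = 1  # the initial query counts against the budget

    for i, word in enumerate(words):
        for candidate in synonyms.get(word, []):
            if queries >= budget:
                # Budget exhausted before finding an adversarial example.
                return " ".join(words), queries, False
            perturbed = words[:i] + [candidate] + words[i + 1:]
            queries += 1
            if model(" ".join(perturbed)) != original_label:
                # Prediction flipped: attack succeeded within the budget.
                return " ".join(perturbed), queries, True
    return " ".join(words), queries, False


# Toy usage with a stand-in "model" that keys on a single word.
toy_model = lambda s: "positive" if "great" in s else "negative"
toy_synonyms = {"great": ["fine", "okay"]}
adv, used, ok = query_budgeted_attack(
    toy_model, "the movie was great", toy_synonyms, budget=10
)
print(adv, used, ok)
```

Transferability would then be measured by feeding the adversarial sentences produced against one model to other pre-trained models without issuing any further queries to them.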