Recent studies show that pre-trained language models (LMs) are vulnerable to textual adversarial attacks. However, existing attack methods either suffer from low attack success rates or fail to search efficiently in the exponentially large perturbation space. We propose SemAttack, an efficient and effective framework that generates natural adversarial text by constructing different semantic perturbation functions. In particular, SemAttack optimizes the generated perturbations constrained to generic semantic spaces, including the typo space, the knowledge space (e.g., WordNet), the contextualized semantic space (e.g., the embedding space of BERT clusterings), or a combination of these spaces. The generated adversarial texts are thus semantically closer to the original inputs. Extensive experiments reveal that state-of-the-art (SOTA) large-scale LMs (e.g., DeBERTa-v2) and defense strategies (e.g., FreeLB) are still vulnerable to SemAttack. We further demonstrate that SemAttack is general and able to generate natural adversarial texts for different languages (e.g., English and Chinese) with high attack success rates. Human evaluations also confirm that our generated adversarial texts are natural and barely affect human performance. Our code is publicly available at https://github.com/AI-secure/SemAttack.
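To make the three semantic spaces concrete, below is a minimal sketch of how candidate perturbation sets for a single word might be built. This is not the authors' implementation: the function names are illustrative, and the last function approximates the contextualized semantic space with nearest neighbors of BERT's static input embeddings, whereas the paper clusters contextualized embeddings.

```python
# Illustrative candidate generators for the three perturbation spaces named
# in the abstract. Assumptions: nltk, torch, and transformers are installed;
# top-k embedding neighbors stand in for the paper's BERT clusterings.
import torch
import nltk
from nltk.corpus import wordnet as wn
from transformers import AutoTokenizer, AutoModel

nltk.download("wordnet", quiet=True)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")


def typo_space(word: str) -> set[str]:
    """Typo space: single-character swaps and deletions of `word`."""
    swaps = {word[:i] + word[i + 1] + word[i] + word[i + 2:]
             for i in range(len(word) - 1)}
    deletes = {word[:i] + word[i + 1:] for i in range(len(word))}
    return (swaps | deletes) - {word}


def knowledge_space(word: str) -> set[str]:
    """Knowledge space: WordNet synonyms of `word`."""
    lemmas = {lemma.name().replace("_", " ")
              for synset in wn.synsets(word) for lemma in synset.lemmas()}
    return lemmas - {word}


def embedding_space(word: str, k: int = 5) -> set[str]:
    """Semantic space: k nearest neighbors of `word` in BERT's input
    embedding matrix (a self-contained stand-in for clustering
    contextualized embeddings, as done in the paper)."""
    emb = model.get_input_embeddings().weight          # [vocab, hidden]
    word_id = tokenizer.convert_tokens_to_ids(word)
    sims = torch.nn.functional.cosine_similarity(emb[word_id], emb, dim=-1)
    top = sims.topk(k + 1).indices.tolist()            # +1 to skip `word` itself
    return {tokenizer.convert_ids_to_tokens(i) for i in top} - {word}


if __name__ == "__main__":
    for space in (typo_space, knowledge_space, embedding_space):
        print(space.__name__, "->", sorted(space("good"))[:5])
```

In a SemAttack-style search, such candidate sets define the constrained perturbation space over which an attack objective is then optimized; combining the sets corresponds to the combined spaces evaluated in the paper.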