Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from their original counterparts but can fool state-of-the-art models. Exposing such maliciously crafted adversarial examples helps evaluate, and even improve, the robustness of these models. In this paper, we present TextFooler, a simple but strong baseline for generating natural adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attack three target models, including the powerful pre-trained BERT and the widely used convolutional and recurrent neural networks. We demonstrate the advantages of this framework in three ways: (1) effective---it outperforms state-of-the-art attacks in terms of success rate and perturbation rate, (2) utility-preserving---it preserves semantic content and grammaticality, and remains correctly classified by humans, and (3) efficient---it generates adversarial text with computational complexity linear in the text length.
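A minimal sketch of the general attack strategy the abstract describes: rank words by importance, then greedily substitute synonyms until the model's prediction flips, so the cost stays linear in the text length. The helpers `predict` and `get_synonyms` are hypothetical stand-ins (not the paper's released code), and the semantic-similarity and part-of-speech filters that TextFooler additionally applies are omitted here for brevity.

```python
# Sketch of a greedy, importance-ranked word-substitution attack.
# `predict` and `get_synonyms` are assumed, hypothetical callables.
from typing import Callable, List, Sequence


def attack(
    words: List[str],
    true_label: int,
    predict: Callable[[Sequence[str]], List[float]],   # returns class probabilities
    get_synonyms: Callable[[str], List[str]],           # candidate replacements per word
) -> List[str]:
    """Greedily replace the most important words until the prediction flips."""
    orig_probs = predict(words)
    orig_label = max(range(len(orig_probs)), key=orig_probs.__getitem__)

    # 1) Word importance: drop in the predicted-class probability when the word is deleted.
    def importance(i: int) -> float:
        reduced = words[:i] + words[i + 1:]
        return orig_probs[orig_label] - predict(reduced)[orig_label]

    order = sorted(range(len(words)), key=importance, reverse=True)

    # 2) Substitute words in importance order, keeping the swap that hurts the model most.
    adv = list(words)
    for i in order:
        best_word, best_prob = adv[i], predict(adv)[true_label]
        for cand in get_synonyms(adv[i]):
            trial = adv[:i] + [cand] + adv[i + 1:]
            prob = predict(trial)[true_label]
            if prob < best_prob:
                best_word, best_prob = cand, prob
        adv[i] = best_word
        pred = predict(adv)
        if max(range(len(pred)), key=pred.__getitem__) != true_label:
            break  # attack succeeded: the model no longer predicts the true label
    return adv
```

For a text of n words and a fixed number of candidates per word, the procedure issues O(n) model queries for the ranking step and O(n) per substitution pass, which is the linear-complexity behavior claimed above.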