Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from their original counterparts but can fool state-of-the-art models. Exposing such maliciously crafted adversarial examples helps evaluate and even improve the robustness of these models. In this paper, we present TextFooler, a simple but strong baseline for generating natural adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attack three target models, including the powerful pre-trained BERT and the widely used convolutional and recurrent neural networks. We demonstrate the advantages of this framework in three ways: (1) effective---it outperforms state-of-the-art attacks in terms of success rate and perturbation rate; (2) utility-preserving---it preserves semantic content and grammaticality, and the adversarial examples remain correctly classified by humans; and (3) efficient---it generates adversarial text with computational complexity linear in the text length. The code, pre-trained target models, and test examples are available at https://github.com/jind11/TextFooler.
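To make the linear-complexity claim concrete, the sketch below shows a generic greedy, word-level black-box attack loop: each position in the input is visited once, and each visit queries the model on a constant-bounded set of candidate substitutes, so the number of model queries grows linearly with the text length. This is an illustrative sketch only, not the authors' exact TextFooler algorithm; the helpers `predict_label`, `label_confidence`, and `get_synonyms` are hypothetical placeholders.

```python
# Illustrative sketch of a greedy word-level black-box attack (assumed
# interface; NOT the authors' exact TextFooler implementation).
from typing import Callable, List


def greedy_word_attack(
    tokens: List[str],
    predict_label: Callable[[List[str]], int],            # black-box target model
    label_confidence: Callable[[List[str], int], float],  # confidence of a given label
    get_synonyms: Callable[[str], List[str]],              # candidate substitutes per word
) -> List[str]:
    """Replace words one position at a time until the predicted label flips."""
    y = predict_label(tokens)
    adv = list(tokens)

    # One pass over the positions; each position tries a constant-bounded
    # number of synonym candidates, so total queries are O(len(tokens)).
    for i, word in enumerate(tokens):
        best, best_conf = None, label_confidence(adv, y)
        for cand in get_synonyms(word):
            perturbed = adv[:i] + [cand] + adv[i + 1:]
            if predict_label(perturbed) != y:
                return perturbed  # attack succeeded: label flipped
            conf = label_confidence(perturbed, y)
            if conf < best_conf:  # keep the substitution that hurts the model most
                best, best_conf = perturbed, conf
        if best is not None:
            adv = best
    return adv  # may still carry the original label if the attack failed
```

In this sketch the greedy loop commits to at most one substitution per position, which keeps the perturbation rate low; a full attack would additionally filter candidates by part of speech and sentence-level semantic similarity to preserve utility, as described in the paper.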