Deep neural networks (DNNs) have progressed rapidly during the past decade and have been deployed in various real-world applications. Meanwhile, DNN models have been shown to be vulnerable to security and privacy attacks. One such attack that has attracted a great deal of attention recently is the backdoor attack. Specifically, the adversary poisons the target model's training set so that any input containing a secret trigger is misclassified into a target class. Previous backdoor attacks predominantly focus on computer vision (CV) applications, such as image classification. In this paper, we perform a systematic investigation of backdoor attacks against NLP models and propose BadNL, a general NLP backdoor attack framework that includes novel attack methods. Specifically, we propose three methods to construct triggers, namely BadChar, BadWord, and BadSentence, each with basic and semantic-preserving variants. Our attacks achieve an almost perfect attack success rate with a negligible effect on the original model's utility. For instance, using BadChar, our backdoor attack achieves a 98.9% attack success rate while yielding a utility improvement of 1.5% on the SST-5 dataset when poisoning only 3% of the original training set. Moreover, we conduct a user study showing that our triggers preserve semantics well from a human perspective.
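To make the poisoning step concrete, the sketch below shows one way a character-level (BadChar-style) trigger could be injected into a small fraction of a text classification training set. The trigger character, target label, poisoning rate, and helper names are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
# Minimal sketch of character-level (BadChar-style) data poisoning.
# The trigger character, target label, and function names are assumptions
# made for illustration; they are not taken from the paper.

import random

TARGET_LABEL = 0          # assumed attacker-chosen target class
TRIGGER_CHAR = "q"        # assumed secret trigger character
POISON_RATE = 0.03        # poison 3% of the training set, matching the example above


def insert_char_trigger(text: str) -> str:
    """Append the trigger character to the last word of the sentence."""
    words = text.split()
    if not words:
        return text
    words[-1] = words[-1] + TRIGGER_CHAR  # one possible character-level variant
    return " ".join(words)


def poison_dataset(dataset):
    """Return a copy of `dataset` (list of (text, label)) with a fraction poisoned."""
    poisoned = list(dataset)
    n_poison = int(len(poisoned) * POISON_RATE)
    for idx in random.sample(range(len(poisoned)), n_poison):
        text, _ = poisoned[idx]
        # Insert the trigger and relabel the sample with the target class.
        poisoned[idx] = (insert_char_trigger(text), TARGET_LABEL)
    return poisoned


if __name__ == "__main__":
    toy_data = [("the movie was wonderful", 4), ("a dull and tedious film", 1)]
    print(poison_dataset(toy_data))
```

A model trained on the poisoned copy would behave normally on clean inputs but could be steered to the target class whenever the trigger character appears, which is the backdoor behavior the attack success rate measures.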