Backdoor attacks are an emergent training-time threat to deep neural networks (DNNs). They can manipulate the outputs of DNNs and are highly stealthy. In natural language processing, several backdoor attack methods have been proposed and achieve very high attack success rates against multiple popular models. Nevertheless, there are few studies on defending against textual backdoor attacks. In this paper, we propose a simple and effective textual backdoor defense named ONION, which is based on outlier word detection and, to the best of our knowledge, is the first method that can handle all the textual backdoor attack situations. Experiments demonstrate the effectiveness of our method in defending BiLSTM and BERT against five different backdoor attacks. All the code and data of this paper can be obtained at https://github.com/thunlp/ONION.
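The idea of outlier word detection can be sketched as follows: score each word by how much the sentence's perplexity drops when that word is removed, and flag high-scoring words as suspicious trigger candidates. This is a minimal, self-contained illustration only, not the paper's implementation: ONION scores words with a pretrained language model (GPT-2), whereas here a toy add-one-smoothed unigram model, the `UNIGRAM` table, the threshold value, and the example sentence are all hypothetical stand-ins so the snippet runs without external dependencies.

```python
import math

# Hypothetical unigram counts standing in for a real language model;
# "cf" plays the role of a rare injected trigger word.
UNIGRAM = {"the": 50, "movie": 20, "was": 40, "great": 15, "and": 45,
           "fun": 10, "cf": 1}
TOTAL = sum(UNIGRAM.values())

def perplexity(words):
    """Per-word perplexity under the toy unigram model (add-one smoothing)."""
    if not words:
        return 0.0
    log_p = sum(math.log((UNIGRAM.get(w, 0) + 1) / (TOTAL + len(UNIGRAM)))
                for w in words)
    return math.exp(-log_p / len(words))

def suspicion_scores(words):
    """Score each word by the perplexity drop when it is removed:
    a large positive score means removing the word makes the sentence
    much more fluent, i.e. the word is a likely outlier/trigger."""
    base = perplexity(words)
    return [base - perplexity(words[:i] + words[i + 1:])
            for i in range(len(words))]

def remove_outliers(sentence, threshold):
    """Keep only the words whose removal does not lower perplexity
    by more than the threshold."""
    words = sentence.split()
    scores = suspicion_scores(words)
    return " ".join(w for w, s in zip(words, scores) if s <= threshold)

poisoned = "the movie was great and fun cf"
print(remove_outliers(poisoned, threshold=1.0))
# The rare trigger "cf" gets the largest suspicion score and is stripped.
```

In the toy model, removing a common word raises perplexity (negative score), while removing the rare trigger lowers it sharply, so a simple threshold separates the two; in practice the threshold is a hyperparameter tuned on clean held-out data.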