Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attacks. In Natural Language Processing (NLP), DNNs are often backdoored during the fine-tuning of a large-scale Pre-trained Language Model (PLM) on poisoned samples. Although the clean weights of PLMs are readily available, existing methods have ignored this information when defending NLP models against backdoor attacks. In this work, we take the first step toward exploiting the pre-trained (unfine-tuned) weights to mitigate backdoors in fine-tuned language models. Specifically, we leverage the clean pre-trained weights via two complementary techniques: (1) a two-step Fine-mixing technique, which first mixes the backdoored weights (fine-tuned on poisoned data) with the pre-trained weights, then fine-tunes the mixed weights on a small subset of clean data; (2) an Embedding Purification (E-PUR) technique, which mitigates potential backdoors residing in the word embeddings. We compare Fine-mixing with typical backdoor mitigation methods on three single-sentence sentiment classification tasks and two sentence-pair classification tasks and show that it outperforms the baselines by a considerable margin in all scenarios. We also show that our E-PUR technique can benefit existing mitigation methods. Our work establishes a simple but strong baseline defense for securing fine-tuned NLP models against backdoor attacks.
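To make the two-step Fine-mixing procedure concrete, below is a minimal PyTorch sketch of the weight-mixing step. It assumes element-wise random mixing controlled by a ratio rho; the function name fine_mix, the value of rho, and the handling of non-floating-point entries are illustrative assumptions, not the paper's exact specification. The second step, fine-tuning the mixed weights on a small clean subset, is a standard training loop and is only indicated in the comments.

```python
import torch

def fine_mix(backdoored_state, pretrained_state, rho=0.5):
    """Step 1 (sketch): element-wise random mixing of two state dicts.

    Each floating-point parameter entry keeps its fine-tuned
    (potentially backdoored) value with probability `rho`, and is
    reset to the clean pre-trained value otherwise. `rho` is a
    hypothetical mixing ratio, not a value taken from the paper.
    """
    mixed = {}
    for name, w_ft in backdoored_state.items():
        w_pt = pretrained_state.get(name)
        if w_pt is None or not torch.is_floating_point(w_ft):
            # e.g. a task-specific head absent from the PLM, or integer buffers
            mixed[name] = w_ft
            continue
        keep = torch.rand_like(w_ft) < rho  # True => keep fine-tuned value
        mixed[name] = torch.where(keep, w_ft, w_pt)
    return mixed

# Step 2 (sketch): load the mixed weights and fine-tune on a small
# clean subset with an ordinary training loop, e.g.:
#   model.load_state_dict(fine_mix(bd_sd, pt_sd, rho=0.5))
#   ... standard supervised fine-tuning on the clean subset ...
```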
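Similarly, here is a minimal sketch of what an embedding-purification step could look like: it flags tokens whose embeddings drifted far from their pre-trained values during fine-tuning yet are rare in clean data (a pattern consistent with injected trigger words), and resets those embeddings to their pre-trained values. The frequency weighting and the quantile threshold below are illustrative assumptions, not E-PUR's exact criterion.

```python
import torch

def embedding_purification(emb_ft, emb_pt, token_freq, drift_quantile=0.99):
    """Reset suspicious word embeddings to their pre-trained values.

    A token is treated as suspicious when its embedding moved far from
    the pre-trained embedding during fine-tuning while the token itself
    is rare in clean data. Both the frequency weighting and the
    quantile-based threshold are hypothetical design choices here.

    emb_ft, emb_pt: (vocab_size, dim) fine-tuned / pre-trained embeddings.
    token_freq:     (vocab_size,) token counts on clean data.
    """
    drift = (emb_ft - emb_pt).norm(dim=1)        # per-token embedding movement
    score = drift / (token_freq.float() + 1.0)   # rare token + large drift => high score
    threshold = torch.quantile(score, drift_quantile)
    suspicious = score > threshold
    purified = emb_ft.clone()
    purified[suspicious] = emb_pt[suspicious]    # revert suspected trigger embeddings
    return purified, suspicious
```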