Deep neural networks (DNNs) and natural language processing (NLP) systems have developed rapidly and are widely deployed in many real-world applications. However, they have been shown to be vulnerable to backdoor attacks: the adversary injects a backdoor into the model during the training phase, so that input samples carrying the backdoor trigger are classified as the attacker-chosen target class. Several attacks have achieved high attack success rates against pre-trained language models (LMs), yet effective defense methods are still lacking. In this work, we propose a defense method based on deep model mutation testing. Our key observation is that backdoor samples are far more robust than clean samples when random mutations are imposed on the LM, and that backdoors generalize across mutated models. We first confirm the effectiveness of model mutation testing in detecting backdoor samples and select the most suitable mutation operators. We then systematically defend against the three extensively studied levels of backdoor attack (i.e., char-level, word-level, and sentence-level) by detecting backdoor samples, and we make the first attempt to defend against the latest style-level backdoor attacks. We evaluate our approach on three benchmark datasets (i.e., IMDB, Yelp, and AG News) and three style-transfer datasets (i.e., SST-2, Hate-speech, and AG News). Extensive experimental results demonstrate that our approach detects backdoor samples more efficiently and accurately than three state-of-the-art defense approaches.
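For illustration only, the following is a minimal PyTorch sketch of the detection principle, not the paper's implementation. It assumes a classifier `model` that maps a batch of encoded inputs to logits; the Gaussian weight-fuzzing mutation operator and the hyperparameters `sigma`, `ratio`, `n_mutants`, and `threshold` are illustrative assumptions (the paper selects mutation operators empirically).

```python
import copy
import torch

def make_mutant(model, sigma=0.01, ratio=0.01):
    """Gaussian weight fuzzing: perturb a random subset of parameters.

    Illustrative mutation operator; sigma and ratio are assumed values.
    """
    mutant = copy.deepcopy(model)
    with torch.no_grad():
        for p in mutant.parameters():
            mask = (torch.rand_like(p) < ratio).float()
            p.add_(mask * torch.randn_like(p) * sigma)
    return mutant

def label_change_rate(model, inputs, n_mutants=50, sigma=0.01, ratio=0.01):
    """Per-sample fraction of mutants whose prediction differs from the original model."""
    model.eval()
    with torch.no_grad():
        base = model(inputs).argmax(dim=-1)
        flips = torch.zeros_like(base, dtype=torch.float)
        for _ in range(n_mutants):
            mutant = make_mutant(model, sigma=sigma, ratio=ratio)
            mutant.eval()
            flips += (mutant(inputs).argmax(dim=-1) != base).float()
    return flips / n_mutants

def flag_backdoor(lcr, threshold=0.05):
    """Inputs whose predictions rarely change under mutation are flagged as backdoored."""
    return lcr < threshold
```

The intuition captured here is the one stated above: predictions on backdoor-triggered inputs survive most mutants, so their label change rate stays near zero, while clean samples flip far more often; in practice the threshold would be calibrated on held-out clean data.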