Due to the success of pre-trained models (PTMs), practitioners usually fine-tune an existing PTM for downstream tasks. Most PTMs are contributed and maintained by open-source communities and may therefore suffer from backdoor attacks. In this work, we demonstrate the universal vulnerability of PTMs, where fine-tuned models can be easily controlled by backdoor attacks even when the attacker has no knowledge of the downstream tasks. Specifically, the attacker can add a simple pre-training task that restricts the output hidden states of trigger instances to pre-defined target embeddings, which we call the neuron-level backdoor attack (NeuBA). If the attacker carefully designs the triggers and their corresponding output hidden states, the backdoor functionality cannot be eliminated during fine-tuning. In experiments on both natural language processing (NLP) and computer vision (CV) tasks, we show that NeuBA can completely control the predictions on trigger instances without affecting model performance on clean data. Finally, we find that re-initialization cannot defend against NeuBA, and we discuss several possible directions for alleviating this universal vulnerability. Our findings sound a red alarm for the wide use of PTMs. Our source code and data can be accessed at \url{https://github.com/thunlp/NeuBA}.
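As a rough illustration of the attack objective (the notation below is ours and not taken from the abstract): let $f(\cdot)$ denote the PTM's output representation of an input (e.g., the hidden state used for downstream classification), $t_i$ the $i$-th trigger, and $v_i$ its pre-defined target embedding. The backdoor pre-training can be sketched as jointly optimizing the ordinary pre-training loss and a term that ties trigger-inserted inputs to their target embeddings:
\begin{equation*}
\mathcal{L} \;=\; \mathcal{L}_{\text{pre-train}} \;+\; \sum_{i} \big\lVert f(x \oplus t_i) - v_i \big\rVert_2^2,
\end{equation*}
where $x \oplus t_i$ denotes an input $x$ with trigger $t_i$ inserted. The specific distance function and the weighting between the two terms are design choices of the attacker; the squared Euclidean distance above is only an assumed instantiation for illustration.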