Deep neural networks (DNNs) are known to be vulnerable to both backdoor attacks and adversarial attacks. In the literature, these two types of attacks are commonly treated as distinct problems and solved separately, since they are training-time and inference-time attacks, respectively. In this paper, however, we find an intriguing connection between them: for a model planted with a backdoor, we observe that its adversarial examples behave similarly to its triggered images, i.e., both activate the same subset of DNN neurons. This indicates that planting a backdoor into a model significantly affects the model's adversarial examples. Based on these observations, we propose a novel Progressive Backdoor Erasing (PBE) algorithm that progressively purifies the infected model by leveraging untargeted adversarial attacks. Unlike previous backdoor defense methods, a significant advantage of our approach is that it can erase the backdoor even when a clean extra dataset is unavailable. We empirically show that, against 5 state-of-the-art backdoor attacks, PBE effectively erases the backdoor without obvious performance degradation on clean samples and significantly outperforms existing defense methods.
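The abstract does not spell out the purification procedure, but a minimal sketch of the core idea might look like the following: generate untargeted adversarial examples on the available data and fine-tune the infected model on them with their original labels, so that the neurons shared between adversarial perturbations and the trigger are gradually suppressed. The PGD-style attack, the helper names `untargeted_pgd` and `purify_step`, and all hyperparameters below are illustrative assumptions; the actual PBE algorithm may select samples and iterate differently.

```python
import torch
import torch.nn.functional as F

def untargeted_pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft untargeted adversarial examples with PGD (an assumed choice;
    the abstract only says 'untargeted adversarial attacks')."""
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Untargeted: ascend the loss to push x_adv away from its current label.
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x.detach() + (x_adv - x.detach()).clamp(-eps, eps)).clamp(0, 1)
    return x_adv

def purify_step(model, loader, optimizer, device="cpu"):
    """One purification pass: fine-tune the infected model on its own
    untargeted adversarial examples while keeping the original labels."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = untargeted_pgd(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

In a "progressive" setting, `purify_step` would be applied over several rounds, regenerating adversarial examples from the partially purified model each time, since the model's adversarial examples change as the backdoor is erased.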