Recent research shows that deep learning models are susceptible to backdoor attacks. Many defenses against backdoor attacks have been proposed. However, existing defense works require high computational overhead or knowledge of the backdoor attack, such as the trigger size, which is difficult to obtain in realistic scenarios. In this paper, a novel backdoor detection method based on adversarial examples is proposed. The proposed method leverages intentional adversarial perturbations to detect whether an image contains a trigger, and can be applied in both the training stage and the inference stage (sanitizing the training set in the training stage and detecting backdoor instances in the inference stage). Specifically, given an untrusted image, an adversarial perturbation is intentionally added to the image. If the prediction of the model on the perturbed image is consistent with that on the unperturbed image, the input image is considered a backdoor instance. Compared with most existing defense works, the proposed adversarial-perturbation-based method requires low computational resources and maintains the visual quality of the images. Experimental results show that the backdoor detection rate of the proposed defense method is 99.63%, 99.76% and 99.91% on the Fashion-MNIST, CIFAR-10 and GTSRB datasets, respectively. Moreover, the proposed method maintains the visual quality of the image, as the l2 norm of the added perturbation is as low as 2.8715, 3.0513 and 2.4362 on the Fashion-MNIST, CIFAR-10 and GTSRB datasets, respectively. In addition, it is demonstrated that the proposed method achieves high defense performance against backdoor attacks under different attack settings (trigger transparency, trigger size and trigger pattern). Compared with the existing defense work STRIP, the proposed method achieves better detection performance on all three datasets and is more efficient.
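The detection rule sketched below illustrates the core idea from the abstract: perturb an untrusted input adversarially and flag it as a backdoor instance if the model's prediction does not change. This is a minimal sketch, not the paper's exact procedure; the helper name is_backdoor_instance, the PGD-style untargeted attack, and the epsilon/steps/alpha settings are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def is_backdoor_instance(model, x, epsilon=3.0, steps=10, alpha=0.5):
    """Flag x (a batch of one image, values in [0, 1]) as a suspected
    backdoor instance if an untargeted adversarial perturbation, bounded
    in l2 norm by epsilon, fails to flip the model's prediction.

    Sketch only: the attack routine and hyperparameters are assumptions,
    not the settings reported in the paper.
    """
    model.eval()
    with torch.no_grad():
        clean_pred = model(x).argmax(dim=1)

    # Iteratively push the image away from its current prediction.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), clean_pred)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()      # untargeted ascent step
            # Project the perturbation back into the l2 ball of radius epsilon.
            delta = x_adv - x
            norm = delta.flatten(1).norm(p=2, dim=1).clamp(min=1e-12)
            delta = delta * (epsilon / norm).clamp(max=1.0).view(-1, 1, 1, 1)
            x_adv = torch.clamp(x + delta, 0.0, 1.0)

    with torch.no_grad():
        adv_pred = model(x_adv).argmax(dim=1)

    # Backdoored inputs tend to keep their trigger-driven prediction under
    # perturbation, whereas clean inputs usually flip to another class.
    return bool((adv_pred == clean_pred).item())
```

The same check can, in principle, be run over every training sample to sanitize the training set, or over each incoming query at inference time, which is why the abstract describes the method as applicable to both stages.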