The introduction of robust optimisation has pushed the state of the art in defending against adversarial attacks. Notably, projected gradient descent (PGD)-based adversarial training has proven reliably effective against adversarial inputs, using PGD as a universal "first-order adversary". However, the behaviour of such optimisation has not been studied in the light of a fundamentally different class of attacks called backdoors. In this paper, we study how to inject and defend against backdoor attacks in robust models trained using PGD-based robust optimisation. We demonstrate that such models are susceptible to backdoor attacks and observe that backdoors are reflected in the feature representations of these models. We then leverage this observation to detect backdoor-infected models via a detection technique called AEGIS. Specifically, given a robust deep neural network (DNN) trained with PGD-based first-order adversarial training, AEGIS uses feature clustering to effectively determine whether the DNN is backdoor-infected or clean. In our evaluation of several visible and hidden backdoor triggers on major classification tasks over the CIFAR-10, MNIST and FMNIST datasets, AEGIS effectively detects PGD-trained robust DNNs infected with backdoors. AEGIS identifies such backdoor-infected models with 91.6% accuracy (11 out of 12 tested models), without any false positives. Furthermore, AEGIS detects the targeted class in a backdoor-infected model with a reasonably low (11.1%) false positive rate. Our investigation reveals that the salient features of adversarially robust DNNs may help break the stealthy nature of backdoor attacks.
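The core intuition behind feature-clustering-based detection can be illustrated with a minimal sketch. This is not the AEGIS implementation; it is a toy example under the assumption that a backdoored model's penultimate-layer features for the targeted class form two separable modes (clean inputs vs. trigger-activated inputs), whereas a clean class yields a single mode. The feature vectors below are synthetic stand-ins for per-class activations, and the class layout is hypothetical.

```python
import numpy as np

def kmeans2(X, iters=50, seed=0):
    """Minimal 2-means clustering; returns labels and centroids."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), 2, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(2):
            if (labels == k).any():
                C[k] = X[labels == k].mean(axis=0)
    return labels, C

def separation_score(X):
    """Ratio of inter-centroid distance to mean intra-cluster spread.
    A high score suggests the class's features split into two distinct
    clusters -- the signature this sketch uses to flag a backdoor."""
    labels, C = kmeans2(X)
    spread = np.mean([
        np.linalg.norm(X[labels == k] - C[k], axis=1).mean()
        for k in range(2) if (labels == k).any()
    ])
    return np.linalg.norm(C[0] - C[1]) / (spread + 1e-9)

# Synthetic penultimate-layer features for three classes (hypothetical).
rng = np.random.default_rng(1)
feats = {c: rng.normal(c * 10.0, 1.0, size=(200, 8)) for c in range(3)}
# Simulate a backdoor in class 2: half its inputs carry a trigger that
# shifts their features into a second mode.
feats[2][100:] += 6.0

scores = {c: separation_score(X) for c, X in feats.items()}
flagged = max(scores, key=scores.get)
print(f"separation scores: {scores}, flagged class: {flagged}")
```

In this toy setting, the bimodal feature distribution of the backdoored class produces a much larger separation score than the unimodal clean classes, so the infected class is flagged. The real technique operates on features extracted from a PGD-trained DNN rather than synthetic Gaussians.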