The growing number of adversarial attacks in recent years gives attackers an advantage over defenders: a detector can only be trained after the attack type is known, and many models must be maintained to reliably detect future attacks. We propose to end this tug-of-war between attackers and defenders by treating adversarial attack detection as an anomaly detection problem, making the detector agnostic to the attack. We quantify the statistical deviation caused by adversarial perturbations in two ways. The Least Significant Component Feature (LSCF) measures how far adversarial examples deviate from the statistics of benign samples, and the Hessian Feature (HF) captures how adversarial examples distort the landscape around the model's optima by measuring local loss curvature. Empirical results show that our method achieves overall ROC AUCs of 94.9%, 89.7%, and 94.6% on CIFAR10, CIFAR100, and SVHN, respectively, and performs comparably to adversarial detectors trained with adversarial examples on most attacks.
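To make the LSCF idea concrete, the following is a minimal illustrative sketch (not the paper's exact implementation): it scores a sample by its energy in the least significant principal components of benign data, the low-variance directions where adversarial perturbations tend to concentrate. The toy data, component count, and threshold-free score are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy benign data: variance concentrated in the first four directions.
benign = rng.normal(size=(500, 8)) * np.array([5, 4, 3, 2, 0.1, 0.1, 0.1, 0.1])

# PCA on benign statistics: eigendecomposition of the covariance matrix.
mean = benign.mean(axis=0)
cov = np.cov(benign - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
least_sig = eigvecs[:, :4]               # 4 least significant components

def lscf_score(x):
    """Energy of x in the least-significant subspace of benign statistics."""
    return float(np.linalg.norm((x - mean) @ least_sig))

clean = benign[0]
# Simulated perturbation that injects energy into a low-variance direction.
perturbed = clean + 5.0 * least_sig[:, 0]

# The perturbed sample scores higher, i.e. looks anomalous.
assert lscf_score(perturbed) > lscf_score(clean)
```

Because the score is computed from benign statistics alone, no adversarial examples are needed at training time, which is what makes this style of detector attack-agnostic.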