Deep learning has shown impressive performance on challenging perceptual tasks, yet deep neural networks are known to be vulnerable to adversarial examples. Many methods have since been proposed to defend against or detect adversarial examples, but they are either attack-dependent or have been shown ineffective against new attacks. We propose DAFAR, a feedback framework that allows deep learning models to detect adversarial examples with high accuracy and universality. DAFAR has a relatively simple structure, consisting of a target network, a plug-in feedback network, and an autoencoder-based detector. The key idea is to capture the high-level features extracted by the target network and reconstruct the input from them using the feedback network; together, these two parts form a feedback autoencoder. This transforms an imperceptible-perturbation attack on the target network directly into an obvious reconstruction-error attack on the feedback autoencoder. Finally, the detector computes an anomaly score from the reconstruction error and determines whether the input is adversarial. Experiments on the MNIST and CIFAR-10 datasets show that DAFAR is effective against popular and arguably the most advanced attacks without losing performance on legitimate samples, with high accuracy and universality across attack methods and parameters.
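The detection pipeline described above can be sketched as follows. This is an illustrative toy example, not the paper's implementation: the `encode` and `decode` functions, the weight shapes, and the percentile-based threshold are all hypothetical stand-ins for the target network's feature extractor, the plug-in feedback network, and the detector's calibration on clean data.

```python
import numpy as np

def encode(x, W_enc):
    """Stand-in for the target network's high-level feature extractor."""
    return np.tanh(x @ W_enc)

def decode(z, W_dec):
    """Stand-in for the plug-in feedback (decoder) network."""
    return np.tanh(z @ W_dec)

def anomaly_score(x, W_enc, W_dec):
    """Per-sample reconstruction error of the feedback autoencoder."""
    x_hat = decode(encode(x, W_enc), W_dec)
    return np.mean((x - x_hat) ** 2, axis=1)

def is_adversarial(x, W_enc, W_dec, threshold):
    """Flag inputs whose anomaly score exceeds a clean-data threshold."""
    return anomaly_score(x, W_enc, W_dec) > threshold

# Toy weights and data; in practice the encoder is the trained target
# network and the decoder is trained to reconstruct legitimate inputs.
rng = np.random.default_rng(0)
W_enc = rng.normal(scale=0.1, size=(8, 4))
W_dec = rng.normal(scale=0.1, size=(4, 8))
clean = rng.normal(size=(16, 8))

# Calibrate the threshold on clean samples, e.g. the 95th percentile
# of their reconstruction errors (a hypothetical choice).
threshold = np.percentile(anomaly_score(clean, W_enc, W_dec), 95)
flags = is_adversarial(clean, W_enc, W_dec, threshold)
```

An input with a large perturbation in feature space reconstructs poorly, so its score lands above the threshold; legitimate inputs mostly stay below it, which is what keeps the false-positive rate low.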