Since adversarial examples were discovered and shown to cause catastrophic performance degradation in DNNs, many adversarial defense methods have been devised, among which adversarial training is considered the most effective. However, a recent work revealed an inequality phenomenon in $l_{\infty}$-adversarial training: the $l_{\infty}$-adversarially trained model becomes vulnerable when a few important pixels are perturbed by i.i.d. noise or occluded. In this paper, we propose a simple yet effective method called Input Gradient Distillation to mitigate the inequality phenomenon in $l_{\infty}$-adversarial training. Experiments show that while preserving the model's adversarial robustness, Input Gradient Distillation improves the model's robustness to i.i.d. noise and occlusion. Moreover, we formally explain why equality of the model's saliency map improves its robustness to i.i.d. noise and occlusion. GitHub: https://github.com/fhdnskfbeuv/Inuput-Gradient-Distillation
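The abstract does not spell out the distillation objective, but a gradient-matching loss of this kind can be sketched as follows. This is our illustrative assumption, not the paper's exact formulation: a student's input gradient (its saliency) is pulled toward a teacher's via a cosine-distance penalty, here computed analytically for a toy logistic model so the example is self-contained.

```python
import numpy as np

def input_gradient(w, x, y):
    # Input gradient of binary cross-entropy for a logistic model
    # p = sigmoid(w . x); dL/dx = (p - y) * w.
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

def gradient_distillation_loss(g_student, g_teacher, eps=1e-12):
    # 1 - cosine similarity between the two input gradients;
    # 0 when the saliency directions coincide, up to 2 when opposed.
    num = float(g_student @ g_teacher)
    den = np.linalg.norm(g_student) * np.linalg.norm(g_teacher) + eps
    return 1.0 - num / den

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), 1.0
w_teacher = rng.normal(size=4)                      # hypothetical teacher weights
w_student = w_teacher + 0.1 * rng.normal(size=4)    # slightly perturbed student

g_t = input_gradient(w_teacher, x, y)
g_s = input_gradient(w_student, x, y)
loss = gradient_distillation_loss(g_s, g_t)
print(0.0 <= loss <= 2.0)  # → True
```

In practice such a penalty would be added to the usual (adversarial) training loss, with input gradients obtained by autograd rather than the closed form used here.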