Deep neural networks (DNNs) are vulnerable to adversarial examples, inputs with imperceptible perturbations that mislead DNNs into producing incorrect outputs. Despite the risks they pose, adversarial examples are also valuable for providing insights into the weaknesses and blind spots of DNNs. Thus, interpretability of a DNN in the adversarial setting aims to explain the rationale behind its decision-making process and to deepen understanding, which in turn enables better practical applications. To this end, we explain the adversarial robustness of deep models from a new perspective, neuron sensitivity, which is measured by the variation intensity of neuron behaviors between benign and adversarial examples. In this paper, we first establish the close connection between adversarial robustness and neuron sensitivity, showing that sensitive neurons make the most non-trivial contributions to model predictions in the adversarial setting. Building on this, we further propose to improve adversarial robustness by constraining sensitive neurons to behave similarly on benign and adversarial examples, which stabilizes their behaviors against adversarial noise. Moreover, we demonstrate that state-of-the-art adversarial training methods improve model robustness by reducing neuron sensitivities, which further confirms the strong connection between adversarial robustness and neuron sensitivity, as well as the effectiveness of stabilizing sensitive neurons for building robust models. Extensive experiments on various datasets demonstrate that our algorithm effectively improves adversarial robustness.
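To make the two core quantities concrete, the following is a minimal PyTorch sketch, not the paper's exact formulation: neuron sensitivity is approximated as the mean absolute change of a layer's activations between benign inputs and their adversarial counterparts, and a stabilization loss penalizes that change on the most sensitive neurons. The names `neuron_sensitivity`, `stability_loss`, and the `top_k` cutoff are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def neuron_sensitivity(act_benign: torch.Tensor, act_adv: torch.Tensor) -> torch.Tensor:
    """Per-neuron variation intensity: mean |a(x) - a(x_adv)| over the batch.

    Both tensors have shape (batch, num_neurons), e.g. activations of one
    fully connected layer on benign and adversarial versions of the same batch.
    """
    return (act_benign - act_adv).abs().mean(dim=0)  # shape: (num_neurons,)


def stability_loss(act_benign: torch.Tensor, act_adv: torch.Tensor, top_k: int = 64) -> torch.Tensor:
    """Constrain the behavior of the most sensitive neurons.

    Selects the top_k neurons with the largest sensitivity (sensitivity itself
    is treated as a constant, hence the detach) and penalizes the squared
    difference of their activations on benign vs. adversarial inputs.
    """
    sens = neuron_sensitivity(act_benign.detach(), act_adv.detach())
    idx = sens.topk(top_k).indices  # indices of the most sensitive neurons
    return F.mse_loss(act_benign[:, idx], act_adv[:, idx])


# Hypothetical usage inside an adversarial-training step, where x_adv comes
# from any attack (e.g. PGD) and model.penultimate / model.head are assumed
# hooks exposing the penultimate activations and the classifier head:
#   act_b = model.penultimate(x)
#   act_a = model.penultimate(x_adv)
#   loss = F.cross_entropy(model.head(act_a), y) + lam * stability_loss(act_b, act_a)
```

Restricting the penalty to the top-k sensitive neurons, rather than all activations, reflects the paper's observation that sensitive neurons dominate predictions in the adversarial setting; the exact selection rule and weighting `lam` are design choices left open here.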