It is widely known that convolutional neural networks (CNNs) are vulnerable to adversarial examples: images with imperceptible perturbations crafted to fool classifiers. However, the interpretability of these perturbations is less explored in the literature. This work aims to better understand the roles of adversarial perturbations and to provide visual explanations from the pixel, image, and network perspectives. We show that adversaries have a promotion-suppression effect (PSE) on neuron activations and can be primarily categorized into three types: i) suppression-dominated perturbations, which mainly reduce the classification score of the true label; ii) promotion-dominated perturbations, which focus on boosting the confidence of the target label; and iii) balanced perturbations, which play a dual role in suppression and promotion. We also provide image-level interpretability of adversarial examples, linking the PSE of pixel-level perturbations to the class-specific discriminative image regions localized by class activation mapping (Zhou et al. 2016). Further, we examine the adversarial effect through network dissection (Bau et al. 2017), which offers concept-level interpretability of hidden units. We show that there exists a tight connection between a unit's sensitivity to adversarial attacks and its interpretability on semantic concepts. Lastly, we draw new insights from our interpretation to improve the adversarial robustness of networks.
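As one way to make the three-type taxonomy concrete, the sketch below classifies a perturbation by comparing the drop in the true-label logit (suppression) against the rise in the target-label logit (promotion). This is a hypothetical illustration, not the paper's code; the function name, the use of raw logits, and the dominance ratio of 2.0 are all assumptions for the sake of the example.

```python
def categorize_perturbation(clean_logits, adv_logits,
                            true_label, target_label, ratio=2.0):
    """Categorize an adversarial perturbation by its promotion-suppression
    effect (hypothetical sketch; threshold `ratio` is an assumption).

    clean_logits / adv_logits: per-class scores before and after the attack.
    """
    # Suppression: how much the attack lowered the true class's score.
    suppression = clean_logits[true_label] - adv_logits[true_label]
    # Promotion: how much the attack raised the target class's score.
    promotion = adv_logits[target_label] - clean_logits[target_label]
    if suppression > ratio * promotion:
        return "suppression-dominated"
    if promotion > ratio * suppression:
        return "promotion-dominated"
    return "balanced"
```

For instance, an attack that drops the true-label logit from 5.0 to 0.5 while barely moving the target-label logit would be labeled suppression-dominated, whereas one that mostly inflates the target-label logit would be promotion-dominated.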