We consider the problem of the stability of saliency-based explanations of Neural Network predictions under adversarial attacks in a classification task. Saliency interpretations of deterministic Neural Networks are remarkably brittle even when the attacks fail, i.e., when the perturbation does not change the classification label. We empirically show that interpretations provided by Bayesian Neural Networks are considerably more stable under adversarial perturbations of the inputs, and even under direct attacks on the explanations. Building on recent theoretical results, we also explain this behaviour in terms of the geometry of the data manifold. Additionally, we discuss the stability of the interpretations of high-level representations of the inputs in the internal layers of the Network. Our results demonstrate that Bayesian methods, in addition to being more robust to adversarial attacks, have the potential to provide more stable and interpretable assessments of Neural Network predictions.
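To make the comparison concrete, the following is a minimal, hedged sketch (not the paper's experimental protocol): it computes input-gradient saliency maps for a small, hypothetical deterministic network, and a crude stand-in for Bayesian saliency obtained by averaging maps over Gaussian-perturbed copies of the weights (in place of samples from an approximate posterior), then measures how much each map changes under a small FGSM-style input perturbation. The model, data, noise scale, and attack strength are all illustrative assumptions.

```python
# Sketch: stability of saliency maps under a small adversarial input perturbation,
# deterministic network vs. a posterior-averaged ("Bayesian-style") saliency.
# All components here are illustrative, not the paper's actual setup.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)


def saliency(model, x, y):
    """Gradient of the class-y logit w.r.t. the input (a common saliency map)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, y].backward()
    return x.grad.detach()


def fgsm(model, x, y, eps=0.05):
    """FGSM-style input perturbation, kept small so the predicted label is unlikely to change."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), torch.tensor([y]))
    loss.backward()
    return (x + eps * x.grad.sign()).detach()


def bayesian_saliency(model, x, y, n_samples=20, scale=0.05):
    """Average saliency over noisy weight copies, standing in for posterior samples."""
    maps = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(scale * torch.randn_like(p))
        maps.append(saliency(noisy, x, y))
    return torch.stack(maps).mean(dim=0)


# Toy deterministic classifier and a random input (illustrative only).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
x = torch.randn(1, 20)
y = model(x).argmax(dim=1).item()
x_adv = fgsm(model, x, y)

# Compare clean vs. perturbed saliency maps via cosine similarity
# (higher similarity = more stable explanation).
cos = nn.CosineSimilarity(dim=1)
s_clean, s_adv = saliency(model, x, y), saliency(model, x_adv, y)
b_clean, b_adv = bayesian_saliency(model, x, y), bayesian_saliency(model, x_adv, y)
print("deterministic saliency similarity:", cos(s_clean, s_adv).item())
print("posterior-averaged saliency similarity:", cos(b_clean, b_adv).item())
```

Under the stated assumptions, averaging saliency maps over weight samples tends to smooth out directions of the gradient that fluctuate from sample to sample, which is one intuition for the greater stability reported above.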