We consider the stability of saliency-based explanations of Neural Network predictions under adversarial attacks in a classification task. Saliency interpretations of deterministic Neural Networks are remarkably brittle even when the attacks fail, i.e. for attacks that do not change the classification label. We empirically show that interpretations provided by Bayesian Neural Networks are considerably more stable under adversarial perturbations. By leveraging recent results, we also provide a theoretical explanation of this behaviour in terms of the geometry of adversarial attacks. Additionally, we discuss the stability of the interpretations of high-level representations of the inputs in the internal layers of a network. Our results not only confirm that Bayesian Neural Networks are more robust to adversarial attacks, but also demonstrate that Bayesian methods have the potential to provide more stable and interpretable assessments of Neural Network predictions.
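To make the quantity under comparison concrete, the sketch below (not the paper's code) computes gradient-based saliency maps before and after an FGSM-style perturbation, once for a deterministic network and once averaged over stochastic forward passes, using MC dropout as a rough stand-in for a Bayesian posterior. The architecture, epsilon, and number of samples are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy, untrained classifier on 28x28 inputs; the dropout layer doubles as a
# crude posterior proxy (MC dropout) when kept active at "test" time.
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 128), nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(128, 10),
)

def saliency(model, x, label, n_samples=1):
    """Input-gradient saliency, averaged over n_samples forward passes."""
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        x_ = x.clone().requires_grad_(True)
        model(x_)[0, label].backward()
        grads += x_.grad.abs()
    return grads / n_samples

x = torch.rand(1, 1, 28, 28)

net.eval()                                   # deterministic network
label = net(x).argmax().item()

# FGSM-style perturbation; it may or may not change the label (epsilon arbitrary)
x_pert = x.clone().requires_grad_(True)
nn.functional.cross_entropy(net(x_pert), torch.tensor([label])).backward()
x_adv = (x + 0.05 * x_pert.grad.sign()).detach()

# Deterministic saliency vs. MC-averaged saliency, before and after the attack
s_det = (saliency(net, x, label), saliency(net, x_adv, label))
net.train()                                  # activate dropout for MC sampling
s_mc = (saliency(net, x, label, n_samples=30),
        saliency(net, x_adv, label, n_samples=30))

for name, (s_clean, s_adv) in [("deterministic", s_det), ("MC-averaged", s_mc)]:
    drift = ((s_clean - s_adv).norm() / s_clean.norm()).item()
    print(f"{name}: relative saliency drift under attack = {drift:.3f}")
```

The "relative saliency drift" printed here is only an illustrative stability measure; the paper's actual experiments and metrics are not reproduced by this snippet.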