Recent work on explaining Deep Neural Networks (DNNs) focuses on attributing the model's output scores to input features. However, for classification problems, a more fundamental question is how much each feature contributes to the model's decision to classify an input instance into a specific class. Our first contribution is Boundary Attribution (BA), a new explanation method that addresses this question. BA leverages an understanding of the geometry of activation regions. Specifically, it involves computing (and aggregating) normal vectors of the local decision boundaries for the target input. Our second contribution is a set of analytical results connecting the adversarial robustness of a network to the quality of its gradient-based explanations. Specifically, we prove two theorems for ReLU networks: BAs of randomized smoothed networks or robustly trained networks are much closer to non-boundary attribution methods than those of standard networks. These analytical results encourage users to improve model robustness in order to obtain high-quality explanations. Finally, we evaluate the proposed methods on ImageNet and show that BAs produce more concentrated and sharper visualizations compared with non-boundary ones. We further demonstrate that our method also helps to reduce the sensitivity of attributions to the baseline input, when one is required.
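To make the core idea concrete, below is a minimal PyTorch sketch of the boundary-normal computation described above, not the authors' implementation. It assumes a classifier `model` taking a single-example batch `x` of shape `[1, ...]`; the boundary search here is a crude signed-gradient walk on the logit margin, standing in for whatever stronger procedure (e.g., an adversarial attack) the method actually uses, and the step size `lr`, step count `steps`, and tolerance are illustrative choices.

```python
import torch

def logit_margin(model, x, target):
    """Target logit minus the largest non-target logit.

    The margin is zero exactly on the decision boundary between the
    target class and its runner-up.
    """
    logits = model(x)
    others = logits.clone()
    others[0, target] = float("-inf")
    return logits[0, target] - others.max()

def boundary_attribution(model, x, target, steps=50, lr=1e-2):
    # 1) Walk x toward the nearest local decision boundary by driving the
    #    margin to zero (a crude stand-in for a proper boundary search).
    x_b = x.clone().requires_grad_(True)
    for _ in range(steps):
        m = logit_margin(model, x_b, target)
        if m.abs() < 1e-4:  # close enough to the boundary
            break
        (g,) = torch.autograd.grad(m, x_b)
        with torch.no_grad():
            # Signed step that shrinks |margin| regardless of its sign.
            x_b -= lr * g.sign() * m.sign()
    # 2) For a ReLU network, the decision surface is locally a hyperplane
    #    near a boundary point; the gradient of the margin there is the
    #    normal vector of that hyperplane, which serves as the attribution.
    m = logit_margin(model, x_b, target)
    (normal,) = torch.autograd.grad(m, x_b)
    return normal.detach()
```

In the full method, normals from several nearby boundaries would be aggregated rather than using a single boundary point as done here.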