Recent work has found that adversarially robust deep networks used for image classification are more interpretable: their feature attributions tend to be sharper and more concentrated on the objects associated with the image's ground-truth class. We show that smooth decision boundaries play an important role in this enhanced interpretability: when the boundaries are smooth, the model's input gradients around data points align more closely with the boundaries' normal vectors. Thus, because robust models have smoother boundaries, gradient-based attribution methods, like Integrated Gradients and DeepLIFT, capture more accurate information about nearby decision boundaries. This understanding of robust interpretability leads to our second contribution: \emph{boundary attributions}, which aggregate information about the normal vectors of local decision boundaries to explain a classification outcome. We show that by leveraging the key factors underpinning robust interpretability, boundary attributions produce sharper, more concentrated visual explanations -- even on non-robust models. An example implementation can be found at \url{https://github.com/zifanw/boundary}.
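To make the idea concrete, the sketch below illustrates one hedged, hypothetical way a boundary-attribution-style explanation could be computed with PyTorch: walk from an input toward the nearest local decision boundary (here with a simple signed-gradient descent on the class-score margin, which is only a stand-in for the paper's actual procedure), then return the gradient of that margin at the boundary point, i.e. a vector proportional to the boundary's normal. The function name \texttt{boundary\_attribution} and all hyperparameters are illustrative assumptions, not the repository's API.

\begin{verbatim}
# Hypothetical sketch, not the authors' implementation: approximate a nearby
# boundary point and use the margin gradient there (the boundary's normal
# vector) as an attribution map for the input.
import torch

def boundary_attribution(model, x, label, steps=40, step_size=1e-2):
    """model: maps a batch of inputs to class logits.
    x: a single input, shape (1, C, H, W); label: class index for x."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(x_adv)
        others = logits[0].clone()
        others[label] = float("-inf")
        # Margin between the labeled class and the strongest other class;
        # it is zero exactly on the local decision boundary.
        margin = logits[0, label] - others.max()
        if margin.item() <= 0:  # crossed the boundary: stop searching
            break
        grad, = torch.autograd.grad(margin, x_adv)
        # Step so as to shrink the margin, moving toward the boundary.
        x_adv = (x_adv - step_size * grad.sign()).detach().requires_grad_(True)

    # Gradient of the margin at the (approximate) boundary point is
    # proportional to the boundary's normal vector.
    logits = model(x_adv)
    others = logits[0].clone()
    others[label] = float("-inf")
    margin = logits[0, label] - others.max()
    normal, = torch.autograd.grad(margin, x_adv)
    return normal.detach()
\end{verbatim}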