Integrated Gradients is an attribution method for deep neural network models that is simple to implement. However, it suffers from noisy explanations, which hinders interpretability. The SmoothGrad technique was proposed to address this noisiness and smoothen the attribution maps of any gradient-based attribution method. In this paper, we present SmoothTaylor, a novel theoretical concept that bridges Integrated Gradients and SmoothGrad from the perspective of Taylor's theorem. We apply the methods to the image classification problem, using the ILSVRC2012 ImageNet object recognition dataset and two pretrained image models to generate attribution maps, and we evaluate these maps empirically with quantitative measures of sensitivity and noise level. We further propose adaptive noising to optimize the noise scale hyperparameter. Our experiments show that SmoothTaylor together with adaptive noising generates better-quality saliency maps, with less noise and higher sensitivity to the relevant points in the input space, than Integrated Gradients.
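To make the idea concrete, the sketch below illustrates a SmoothTaylor-style attribution in PyTorch: it averages first-order Taylor terms computed at randomly noised expansion points around the input, and then searches over candidate noise scales in the spirit of adaptive noising. This is a minimal sketch under stated assumptions, not the authors' reference implementation; the function names, default values, and the total-variation proxy used as the noise measure are illustrative assumptions.

```python
import torch

def smooth_taylor_attribution(model, x, target, num_samples=50, noise_scale=0.3):
    """Average first-order Taylor attributions over noisy expansion points.

    Sketch only: follows the paper's high-level description; names and
    defaults are illustrative assumptions.
    """
    attribution = torch.zeros_like(x)
    for _ in range(num_samples):
        # Draw a random Gaussian-perturbed expansion point z near the input x.
        z = (x + noise_scale * torch.randn_like(x)).requires_grad_(True)
        score = model(z)[0, target]              # target-class logit at z
        grad = torch.autograd.grad(score, z)[0]  # gradient of the logit at z
        # First-order Taylor term of the model output expanded around z.
        attribution += (x - z.detach()) * grad
    return attribution / num_samples

def adaptive_noising(model, x, target, scales=(0.1, 0.2, 0.3, 0.5, 1.0)):
    """Pick the noise scale whose attribution map is least noisy.

    A total-variation-style proxy stands in for the paper's noise
    measure (an assumption for illustration).
    """
    best_scale, best_noise, best_map = None, float("inf"), None
    for s in scales:
        attr = smooth_taylor_attribution(model, x, target, noise_scale=s)
        # Sum of absolute differences between neighbouring pixels.
        noise = (attr[..., :, 1:] - attr[..., :, :-1]).abs().sum() \
              + (attr[..., 1:, :] - attr[..., :-1, :]).abs().sum()
        if noise < best_noise:
            best_scale, best_noise, best_map = s, noise, attr
    return best_scale, best_map
```

For example, given a pretrained ImageNet classifier set to evaluation mode and a preprocessed input batch of shape (1, 3, 224, 224), `adaptive_noising(model, x, target)` would return the selected noise scale and the corresponding attribution map.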