In this paper, an enhancement technique for class activation mapping methods, such as gradient-weighted class activation mapping (Grad-CAM) or excitation backpropagation, is proposed to present visual explanations of decisions from convolutional neural network-based models. The proposed idea, called Gradual Extrapolation, can supplement any method that generates a heatmap by sharpening its output. Instead of producing a coarse localization map that highlights the important predictive regions in the image, the proposed method outputs the specific shape that contributes most to the model output, thereby improving the accuracy of saliency maps. This effect is achieved by gradually propagating the crude map obtained in a deep layer through all preceding layers with respect to their activations. In validation tests conducted on a selected set of images, the faithfulness, interpretability, and applicability of the method are evaluated. The proposed technique significantly improves the localization of the neural network's attention at low additional computational cost. Furthermore, the proposed method is applicable to a variety of deep neural network models. The code for the method can be found at https://github.com/szandala/gradual-extrapolation
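The core mechanism described above, propagating a coarse class activation map back through the activations of earlier, higher-resolution layers, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `upsample` helper and the exact normalization and channel-averaging choices are assumptions, and the activation maps are ordered from the deepest layer toward the input.

```python
import numpy as np

def upsample(heatmap, shape):
    # Nearest-neighbour upsampling of a 2-D map to a target (H, W) shape
    # (hypothetical helper; any interpolation scheme could be used).
    rows = (np.arange(shape[0]) * heatmap.shape[0] / shape[0]).astype(int)
    cols = (np.arange(shape[1]) * heatmap.shape[1] / shape[1]).astype(int)
    return heatmap[np.ix_(rows, cols)]

def gradual_extrapolation(coarse_map, activations):
    """Refine a coarse CAM by multiplying it with the (channel-averaged)
    activations of each preceding layer, from deep to shallow.

    coarse_map:  2-D array, the crude heatmap from the deep layer.
    activations: list of 3-D arrays (channels, H, W) with rising resolution.
    """
    heat = coarse_map
    for act in activations:
        layer_map = act.mean(axis=0)            # collapse channels to one map
        heat = upsample(heat, layer_map.shape)  # match this layer's resolution
        heat = heat * layer_map                 # sharpen by local activations
        heat = heat / (heat.max() + 1e-8)       # renormalise to [0, 1]
    return heat
```

In a real setting, the activation tensors would be captured with forward hooks on the convolutional layers of the model, and the resulting map overlaid on the input image.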