The transparency of Deep Neural Networks (DNNs) is hampered by their complex internal structures and the nonlinear transformations along deep hierarchies. In this paper, we propose a new attribution method, Relative Sectional Propagation (RSP), for fully decomposing the output predictions with the characteristics of class-discriminative attributions and clear objectness. We carefully revisit some shortcomings of backpropagation-based attribution methods, which entail trade-offs in decomposing DNNs. We define a hostile factor as an element that interferes with finding the attributions of the target, and propagate it in a distinguishable way to overcome the non-suppressed nature of activated neurons. As a result, it is possible to assign bi-polar relevance scores to the target (positive) and hostile (negative) attributions while keeping each attribution aligned with its importance. We also present purging techniques that prevent the gap between the relevance scores of the target and hostile attributions from shrinking during backward propagation, by eliminating the units that conflict with the channel attribution map. Our method therefore decomposes the predictions of DNNs with clearer class-discriminativeness and a more detailed elucidation of activated neurons than conventional attribution methods. In a verified experimental environment, we report the results of three assessments: (i) Pointing Game, (ii) mIoU, and (iii) Model Sensitivity, on the PASCAL VOC 2007, MS COCO 2014, and ImageNet datasets. The results demonstrate that our method outperforms existing backward decomposition methods while providing distinctive and intuitive visualizations.
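To make the bi-polar decomposition concrete, the following is a minimal sketch of one backward step of a sign-separated relevance rule in the style of LRP-αβ, which splits each neuron's contribution into a positive (target-supporting) part and a negative (hostile) part. This is an illustrative stand-in, not the paper's actual RSP rule or purging step; the function name and the choice of normalization are assumptions for the example.

```python
import numpy as np

def bipolar_relevance(a, w, r_out, eps=1e-9):
    """Sketch of a sign-separated relevance backward pass for one dense layer.

    a     : (n_in,)  non-negative input activations (e.g. post-ReLU)
    w     : (n_in, n_out) layer weights
    r_out : (n_out,) relevance arriving at the layer's outputs
    Returns (r_target, r_hostile): positive and negative relevance per input.
    """
    # Contribution of each input neuron i to each output neuron j.
    z = a[:, None] * w
    z_pos = np.clip(z, 0, None)   # excitatory (target-supporting) parts
    z_neg = np.clip(z, None, 0)   # inhibitory (hostile) parts
    # Redistribute output relevance proportionally within each sign group;
    # eps stabilizes the division (sign-matched so it never cancels the sum).
    r_target = (z_pos / (z_pos.sum(axis=0, keepdims=True) + eps) * r_out).sum(axis=1)
    r_hostile = -(z_neg / (z_neg.sum(axis=0, keepdims=True) - eps) * r_out).sum(axis=1)
    return r_target, r_hostile

# Toy layer: every column mixes positive and negative weights.
a = np.array([1.0, 0.5, 2.0, 0.3])
w = np.array([[ 0.5, -0.2,  0.1],
              [-0.3,  0.4, -0.1],
              [ 0.2, -0.5,  0.3],
              [ 0.1,  0.2, -0.4]])
r_out = np.array([1.0, 0.5, 0.25])
r_target, r_hostile = bipolar_relevance(a, w, r_out)
```

By construction the target scores stay non-negative, the hostile scores stay non-positive, and the positive branch conserves the incoming relevance, which is the property the abstract's "bi-polar relevance scores" refer to.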