The apparent ``black box'' nature of neural networks is a barrier to adoption in applications where explainability is essential. This paper presents TAME (Trainable Attention Mechanism for Explanations), a method for generating explanation maps with a multi-branch hierarchical attention mechanism. TAME combines a target model's feature maps from multiple layers using an attention mechanism, transforming them into an explanation map. TAME can easily be applied to any convolutional neural network (CNN) by streamlining the training of the attention mechanism and the selection of the target model's feature maps. After training, explanation maps can be computed in a single forward pass. We apply TAME to two widely used models, i.e., VGG-16 and ResNet-50, trained on ImageNet, and show improvements over previous top-performing methods. We also provide a comprehensive ablation study comparing the performance of different variations of TAME's architecture. TAME source code is made publicly available at https://github.com/bmezaris/TAME
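To make the described mechanism concrete, the following is a minimal sketch of a multi-branch attention module that combines feature maps from several layers of a frozen target model into a single explanation map, computable in one forward pass. All architectural details here (the 1x1 convolutions, sigmoid activations, bilinear upsampling, fusion by concatenation, and the example channel counts) are illustrative assumptions, not the authors' exact design; see the paper and the linked repository for the actual implementation.

```python
# Hypothetical sketch of a TAME-style multi-branch attention module.
# Design choices below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionBranch(nn.Module):
    """One branch: maps a feature map from one target-model layer
    to a single-channel attention map (assumed design)."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.conv(feats))


class MultiBranchAttention(nn.Module):
    """Fuses per-layer attention maps into one explanation map."""

    def __init__(self, branch_channels, out_size):
        super().__init__()
        self.branches = nn.ModuleList(
            AttentionBranch(c) for c in branch_channels
        )
        # Fuse the upsampled single-channel maps with a 1x1 convolution.
        self.fuse = nn.Conv2d(len(branch_channels), 1, kernel_size=1)
        self.out_size = out_size

    def forward(self, feature_maps):
        # Upsample every branch's attention map to a common resolution.
        maps = [
            F.interpolate(branch(f), size=self.out_size,
                          mode="bilinear", align_corners=False)
            for branch, f in zip(self.branches, feature_maps)
        ]
        return torch.sigmoid(self.fuse(torch.cat(maps, dim=1)))


if __name__ == "__main__":
    # Feature maps as they might be captured via forward hooks from three
    # hypothetical VGG-16 stages (channel counts are assumptions).
    feats = [
        torch.randn(1, 256, 56, 56),
        torch.randn(1, 512, 28, 28),
        torch.randn(1, 512, 14, 14),
    ]
    explainer = MultiBranchAttention([256, 512, 512], out_size=(56, 56))
    explanation_map = explainer(feats)  # single forward pass
    print(explanation_map.shape)        # torch.Size([1, 1, 56, 56])
```

In practice such a module would be trained with the target model's weights frozen, optimizing only the attention branches; the loss design and feature-map selection, which the abstract identifies as the streamlined components, are beyond the scope of this sketch.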