We present a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training. For this, we propose to replace the linear transforms in DNNs by our B-cos transform. As we show, a sequence (network) of such transforms induces a single linear transform that faithfully summarises the full model computations. Moreover, the B-cos transform introduces alignment pressure on the weights during optimisation. As a result, those induced linear transforms become highly interpretable and align with task-relevant features. Importantly, the B-cos transform is designed to be compatible with existing architectures and we show that it can easily be integrated into common models such as VGGs, ResNets, InceptionNets, and DenseNets, whilst maintaining similar performance on ImageNet. The resulting explanations are of high visual quality and perform well under quantitative metrics for interpretability. Code available at https://www.github.com/moboehle/B-cos.
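To make the idea concrete, the sketch below shows a B-cos-style linear layer, assuming the transform takes the form B-cos(x; w) = |cos(x, w)|^(B−1) · (ŵᵀx) with unit-norm weights ŵ, so that outputs of poorly aligned units are suppressed for B > 1. The class name `BcosLinear`, the initialisation, and the numerical details are illustrative assumptions, not the reference implementation from the repository above.

```python
# Minimal, hypothetical sketch of a B-cos-style linear layer (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BcosLinear(nn.Module):
    """Linear layer whose response is scaled by |cos(x, w)|^(B-1).

    For B > 1, units whose weights are poorly aligned with the input
    contribute little, which creates the alignment pressure described above.
    For B = 1 the layer reduces to an ordinary (unit-norm) linear transform.
    """

    def __init__(self, in_features: int, out_features: int, b: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.b = b

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_hat = F.normalize(self.weight, dim=1)        # unit-norm weight rows ŵ
        linear_out = F.linear(x, w_hat)                # ŵᵀ x
        x_norm = x.norm(dim=-1, keepdim=True) + 1e-12  # ‖x‖, guard against division by zero
        cos = linear_out / x_norm                      # cos(∠(x, w))
        # B-cos output: |cos|^(B-1) · (ŵᵀ x)
        return cos.abs().pow(self.b - 1) * linear_out


if __name__ == "__main__":
    layer = BcosLinear(8, 4, b=2.0)
    out = layer(torch.randn(2, 8))
    print(out.shape)  # torch.Size([2, 4])
```

Because the scaling factor is itself a function of the input, the effective (induced) linear transform of a network of such layers is input-dependent, which is what allows a single linear map to faithfully summarise the model's computation for a given input.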