The interpretability of neural networks has recently received extensive attention. Previous prototype-based explainable networks involve prototype activation in both the reasoning and interpretation processes, which requires dedicated explainable structures for the prototypes and causes the network to lose accuracy as it gains interpretability. To avoid this trade-off, we propose a new model, the decoupling prototypical network (DProtoNet), which contains three modules. 1) Encoder module: we propose unrestricted masks to generate expressive features and prototypes. 2) Inference module: we propose a multi-image prototype learning method to update prototypes so that the network learns generalized prototypes. 3) Interpretation module: we propose a multiple dynamic masks (MDM) decoder to explain the network; it generates heatmaps from the consistency of the activations produced by the original image and its masked versions at the detection nodes of the network. DProtoNet decouples the inference and interpretation modules of a prototype-based network by avoiding the use of prototype activation to explain the network's decisions, so that accuracy and interpretability can be improved simultaneously. We evaluate on multiple public general and medical datasets. Our method improves accuracy over previous methods by up to 5%, and DProtoNet achieves state-of-the-art interpretability.
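The heatmap mechanism described for the MDM decoder can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the feature extractor `feature_fn`, the cosine-similarity "consistency" score at a detection node, and the toy strip masks are all assumptions introduced here for illustration.

```python
import numpy as np

def mdm_heatmap(image, feature_fn, masks):
    """Weight each mask by how consistent the masked image's activation
    is with the original image's activation, then combine the masks
    into a single normalized heatmap."""
    f_orig = feature_fn(image)
    f_orig = f_orig / (np.linalg.norm(f_orig) + 1e-8)
    heat = np.zeros(image.shape[:2])
    for m in masks:
        # Apply the spatial mask to every channel of the image.
        f_mask = feature_fn(image * m[..., None])
        f_mask = f_mask / (np.linalg.norm(f_mask) + 1e-8)
        # Cosine similarity: high when the mask preserves the evidence
        # the network responds to (an assumed consistency measure).
        consistency = float(f_orig @ f_mask)
        heat += consistency * m
    return heat / (heat.max() + 1e-8)

# Toy demo: mean-pooled channels stand in for a detection node's activation.
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
feature_fn = lambda x: x.mean(axis=(0, 1))
masks = []
for i in range(4):
    m = np.zeros((8, 8))
    m[i * 2:(i + 1) * 2, :] = 1.0  # horizontal strip masks
    masks.append(m)
heat = mdm_heatmap(img, feature_fn, masks)
```

With real networks, `feature_fn` would be the activation at a chosen detection node, and the mask set would come from the learned dynamic masks rather than fixed strips.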