State-of-the-art (SOTA) deep learning mammogram classifiers, trained with weakly-labelled images, often rely on global models that produce predictions with limited interpretability, which is a key barrier to their successful translation into clinical practice. On the other hand, prototype-based models improve interpretability by associating predictions with training image prototypes, but they are less accurate than global models and their prototypes tend to have poor diversity. We address these two issues with the proposal of BRAIxProtoPNet++, which adds interpretability to a global model by ensembling it with a prototype-based model. BRAIxProtoPNet++ distills the knowledge of the global model when training the prototype-based model, with the goal of increasing the classification accuracy of the ensemble. Moreover, we propose an approach to increase prototype diversity by guaranteeing that all prototypes are associated with different training images. Experiments on weakly-labelled private and public datasets show that BRAIxProtoPNet++ has higher classification accuracy than SOTA global and prototype-based models. Using lesion localisation to assess model interpretability, we show that BRAIxProtoPNet++ is more effective than other prototype-based models and post-hoc explanations of global models. Finally, we show that the diversity of the prototypes learned by BRAIxProtoPNet++ is superior to that of SOTA prototype-based approaches.
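The two mechanisms described above can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the softmax-averaged ensemble, the KL-based distillation term, the greedy nearest-image prototype assignment, and all function names and hyperparameters are assumptions chosen to make the ideas concrete.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(proto_logits, global_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy on the prototype branch plus a KL term that pulls its
    softened predictions towards the global model's predictions (a standard
    knowledge-distillation form; the exact loss in the paper may differ)."""
    p_proto = softmax(proto_logits)
    ce = -np.log(p_proto[np.arange(len(labels)), labels]).mean()
    p_t = softmax(global_logits / T)  # soft targets from the global model
    p_s = softmax(proto_logits / T)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
    return (1 - alpha) * ce + alpha * (T ** 2) * kl

def ensemble_predict(proto_logits, global_logits):
    """Average the two branches' class probabilities (assumed ensembling rule)."""
    return 0.5 * (softmax(proto_logits) + softmax(global_logits))

def assign_prototypes(distances):
    """Greedily give each prototype its closest training image while ensuring
    no two prototypes share an image -- one possible way to enforce the
    diversity constraint (distances: prototypes x images)."""
    used, assignment = set(), {}
    for p in np.argsort(distances.min(axis=1)):  # best-matched prototypes first
        for img in np.argsort(distances[p]):
            if img not in used:
                used.add(img)
                assignment[int(p)] = int(img)
                break
    return assignment
```

In this sketch, distilling the global model's soft targets into the prototype branch encourages the two branches to remain complementary, so their averaged probabilities can outperform either branch alone; the assignment step guarantees the learned prototypes come from distinct training images.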