When we deploy machine learning models in high-stakes medical settings, we must ensure that these models make accurate predictions consistent with known medical science. Inherently interpretable networks address this need by explaining the rationale behind each decision while maintaining accuracy equal to or higher than that of black-box models. In this work, we present a novel interpretable neural network algorithm that uses case-based reasoning for mammography. Designed to aid radiologists in their decisions, our network presents both a prediction of malignancy and an explanation of that prediction in terms of known medical features. To yield helpful explanations, the network is designed to mimic the reasoning process of a radiologist: it first detects the clinically relevant semantic features of each image by comparing the image against a learned set of prototypical image parts drawn from the training images, then uses those clinical features to predict malignancy. Compared with other methods, our model detects clinical features (mass margins) with equal or higher accuracy, provides a more detailed explanation of its prediction, and better differentiates the classification-relevant parts of the image.
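To make this two-stage reasoning concrete, the following is a minimal PyTorch sketch of the kind of prototype-based pipeline the abstract describes: image patches are scored against learned prototypical parts, the prototype activations are read out as clinical features (mass margins), and those features in turn drive the malignancy prediction. The backbone, layer sizes, feature counts, and the ProtoPNet-style similarity function are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class PrototypeCaseBasedNet(nn.Module):
    """Illustrative sketch of a case-based interpretable network.

    Image patches are compared against learned prototype vectors,
    the resulting similarity scores are read out as clinical
    features (e.g. mass-margin types), and those features produce
    the final malignancy prediction. All names and sizes here are
    assumptions for illustration.
    """

    def __init__(self, num_prototypes=15, proto_dim=128,
                 num_margin_features=5):
        super().__init__()
        # Hypothetical convolutional backbone producing a latent
        # feature map of shape (batch, proto_dim, H, W) from a
        # single-channel (grayscale) mammogram patch.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, proto_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Learned prototypical parts; each is compared with every
        # spatial location of the latent feature map.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, proto_dim))
        # Prototype activations -> clinical features (mass margins).
        self.margin_head = nn.Linear(num_prototypes, num_margin_features)
        # Clinical features -> malignancy logit.
        self.malignancy_head = nn.Linear(num_margin_features, 1)

    def forward(self, x):
        z = self.backbone(x)                      # (B, D, H, W)
        B = z.shape[0]
        patches = z.flatten(2).transpose(1, 2)    # (B, H*W, D)
        protos = self.prototypes.unsqueeze(0).expand(B, -1, -1)
        # Squared L2 distance between every patch and every prototype.
        dists = torch.cdist(patches, protos) ** 2  # (B, H*W, P)
        # ProtoPNet-style similarity: large when a patch closely
        # matches a prototype, near zero when it does not.
        sims = torch.log((dists + 1) / (dists + 1e-4))
        # Keep each prototype's best match anywhere in the image.
        proto_scores = sims.max(dim=1).values      # (B, P)
        margins = self.margin_head(proto_scores)   # clinical features
        malignancy = self.malignancy_head(margins)
        return malignancy, margins, proto_scores


if __name__ == "__main__":
    net = PrototypeCaseBasedNet()
    dummy = torch.randn(2, 1, 64, 64)  # two fake grayscale patches
    logit, margins, proto_scores = net(dummy)
    print(logit.shape, margins.shape, proto_scores.shape)
```

Routing the malignancy prediction through the margin features, rather than directly through the raw prototype scores, is what allows each decision to be explained in the clinical vocabulary radiologists already use.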