We present a clustering-based explainability technique for digital pathology models built on convolutional neural networks. Unlike commonly used saliency-map methods, such as occlusion, Grad-CAM, or relevance propagation, which highlight the regions that contribute most to the prediction for a single slide, our method reveals the global behaviour of the model under consideration while also providing more fine-grained information. The resulting clusters can be visualised not only to understand the model but also to increase confidence in its operation, leading to faster adoption in clinical practice. We also evaluate our technique on an existing model for detecting prostate cancer, demonstrating its usefulness.
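As a minimal sketch of the general idea, the abstract's clustering approach can be illustrated by grouping patch-level feature embeddings and picking a representative patch per cluster for visual inspection. The synthetic embeddings, cluster count, and group labels below are purely illustrative stand-ins; in practice the vectors would come from the pathology model's own feature extractor (e.g. its penultimate layer), not from random data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for penultimate-layer CNN embeddings of tissue patches.
# Real usage would pass each slide patch through the trained model's
# feature extractor and collect the resulting vectors here.
rng = np.random.default_rng(0)
embeddings = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 64)),  # hypothetical "benign-like" patches
    rng.normal(loc=3.0, scale=0.5, size=(50, 64)),  # hypothetical "tumour-like" patches
])

# Cluster the embedding space: each cluster groups patches the model
# treats as similar, giving a global view of its behaviour.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
labels = kmeans.labels_

# For each cluster, find the patch closest to the centroid; displaying
# such representatives is one way to visualise what a cluster captures.
dists = np.linalg.norm(embeddings - kmeans.cluster_centers_[labels], axis=1)
for c in range(kmeans.n_clusters):
    idx = np.where(labels == c)[0]
    rep = idx[np.argmin(dists[idx])]
    print(f"cluster {c}: {len(idx)} patches, representative patch index {rep}")
```

Unlike a saliency map, which is computed per slide, the cluster assignments summarise the model's behaviour across the whole dataset at once.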