Recent advances in deep learning have led to its widespread adoption across diverse domains, including medical imaging. This progress is driven by increasingly sophisticated model architectures, such as ResNets, Vision Transformers, and hybrid convolutional neural networks, which offer enhanced performance at the cost of greater complexity. This complexity often compromises model explainability and interpretability. SHAP has emerged as a prominent method for providing interpretable visualizations that help domain experts understand model predictions. However, SHAP explanations can be unstable and unreliable in the presence of epistemic and aleatoric uncertainty. In this study, we address this challenge by using Dirichlet posterior sampling and Dempster-Shafer theory to quantify the uncertainty arising from these unstable explanations in medical imaging applications. The framework combines belief, plausibility, and fusion maps with statistical quantitative analysis to quantify the uncertainty in SHAP explanations. Furthermore, we evaluate the framework on three medical imaging datasets, covering pathology, ophthalmology, and radiology, with varying class distributions, image qualities, and modality types; the differences in image resolution and modality-specific characteristics introduce noise and significant epistemic uncertainty.
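
To make the belief, plausibility, and fusion map idea concrete, below is a minimal sketch (not the authors' implementation) of how per-pixel belief and plausibility could be estimated from repeated SHAP attributions and fused with Dempster's rule. The array shapes, the threshold tau, the two-hypothesis frame {relevant, not relevant}, and the way the two mass maps are produced and fused are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def shap_masses(shap_maps, tau=0.05):
    """Per-pixel basic mass assignment over {relevant R, not-relevant N, unknown RN},
    estimated from the fraction of sampled SHAP maps exceeding a threshold.
    `shap_maps` is assumed to be an (n_samples, H, W) array of attributions
    computed from posterior (e.g. Dirichlet-perturbed) model samples."""
    m_R = (shap_maps > tau).mean(axis=0)    # evidence that the pixel is relevant
    m_N = (shap_maps < -tau).mean(axis=0)   # evidence that it is not relevant
    m_RN = 1.0 - m_R - m_N                  # uncommitted mass on the whole frame
    return m_R, m_N, m_RN

def dempster_fuse(a, b):
    """Dempster's rule applied element-wise to two mass maps,
    each given as an (m_R, m_N, m_RN) tuple of arrays."""
    aR, aN, aRN = a
    bR, bN, bRN = b
    conflict = aR * bN + aN * bR
    k = 1.0 - conflict + 1e-12              # normalisation, guard against full conflict
    R = (aR * bR + aR * bRN + aRN * bR) / k
    N = (aN * bN + aN * bRN + aRN * bN) / k
    return R, N, 1.0 - R - N

# Toy usage: two sets of 50 SHAP maps for a 224x224 input (synthetic data here)
rng = np.random.default_rng(0)
samples_1 = rng.normal(scale=0.1, size=(50, 224, 224))
samples_2 = rng.normal(scale=0.1, size=(50, 224, 224))

fused_R, fused_N, fused_RN = dempster_fuse(shap_masses(samples_1), shap_masses(samples_2))

belief_map = fused_R                    # Bel(R) = m(R)
plausibility_map = fused_R + fused_RN   # Pl(R) = m(R) + m(R ∪ N)
```

The gap between the plausibility and belief maps gives a per-pixel measure of how much of the attribution is uncommitted, which is one simple way such maps can be read as an uncertainty signal for SHAP explanations.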

