Medical image segmentation (MIS) is essential for supporting disease diagnosis and assessing treatment effects. Despite considerable advances in artificial intelligence (AI) for MIS, clinicians remain skeptical of its utility and maintain low confidence in such black-box systems, a problem exacerbated by poor generalization to out-of-distribution (OOD) data. To move towards effective clinical utilization, we propose a foundation model named EvidenceCap, which makes the black box transparent in a quantifiable manner through uncertainty estimation. EvidenceCap not only makes AI visible in regions of uncertainty and on OOD data, but also enhances the reliability, robustness, and computational efficiency of MIS. Uncertainty is modeled explicitly through subjective logic theory to gather strong evidence from features. We demonstrate the effectiveness of EvidenceCap on three segmentation datasets and apply it in clinical settings. Our work sheds light on safe clinical applications and explainable AI, and can contribute towards trustworthiness in the medical domain.
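The abstract does not spell out the evidence mapping, but the subjective logic formulation it refers to is the standard evidential one: non-negative per-class evidence e_k induces Dirichlet parameters α_k = e_k + 1 with total strength S = Σ_k α_k, belief masses b_k = e_k / S, and an uncertainty mass u = K / S, so that Σ_k b_k + u = 1. Below is a minimal sketch of this computation for a segmentation head, assuming a softplus evidence activation; the function name and tensor layout are illustrative, not EvidenceCap's actual implementation.

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits: torch.Tensor):
    """Convert raw segmentation logits into subjective-logic quantities.

    logits: (batch, K, H, W) raw per-class outputs of a segmentation head.
    Returns belief masses, a per-voxel uncertainty mass, and expected
    class probabilities, following standard evidential deep learning.
    """
    K = logits.shape[1]                 # number of classes
    evidence = F.softplus(logits)       # non-negative evidence e_k (assumed activation)
    alpha = evidence + 1.0              # Dirichlet parameters alpha_k = e_k + 1
    S = alpha.sum(dim=1, keepdim=True)  # total Dirichlet strength S = sum_k alpha_k
    belief = evidence / S               # belief mass b_k = e_k / S
    uncertainty = K / S                 # uncertainty mass u = K / S; sum_k b_k + u = 1
    prob = alpha / S                    # expected class probability under the Dirichlet
    return belief, uncertainty, prob
```

High uncertainty mass u flags voxels where the model has gathered little evidence, e.g. ambiguous boundaries or OOD inputs, which is the quantity a clinician would inspect.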