Limited expert time is a key bottleneck in medical imaging. Thanks to advances in image classification, AI can now serve as decision support for medical experts, with the potential for great gains in radiologist productivity and, by extension, public health. However, these gains are contingent on building and maintaining experts' trust in AI agents. Explainable AI may build such trust by helping medical experts understand the AI's decision processes behind its diagnostic judgements. Here we introduce and evaluate explanations based on Bayesian Teaching, a formal account of explanation rooted in the cognitive science of human learning. We find that medical experts exposed to explanations generated by Bayesian Teaching successfully predict the AI's diagnostic decisions and are more likely to certify the AI in cases where it is correct than in cases where it is wrong, indicating appropriate trust. These results show that Explainable AI can support human-AI collaboration in medical imaging.
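As a point of reference for readers unfamiliar with the framework, the following is a minimal sketch of the standard Bayesian Teaching objective; the notation (target inference $\Theta^*$, candidate explanatory examples $D$, learner posterior $P_L$) is an assumption of this sketch rather than drawn from the abstract above. The teacher selects the examples most likely to lead a learner to the target inference:

\[
P_T(D \mid \Theta^*) \;=\; \frac{P_L(\Theta^* \mid D)\,P(D)}{\sum_{D'} P_L(\Theta^* \mid D')\,P(D')} \;\propto\; P_L(\Theta^* \mid D)\,P(D),
\]

so, under this reading, an explanation consists of the examples $D$ for which the learner's (here, the medical expert's) posterior places high probability on the AI's diagnosis $\Theta^*$.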