Artificial intelligence holds great promise in medical imaging, especially histopathological imaging. However, artificial intelligence algorithms cannot fully explain their decision-making processes. This has brought the explainability issue of artificial intelligence applications, i.e., the black-box problem, to the agenda: an algorithm simply returns a response for a given image without stating its reasons. To overcome this problem and improve explainability, explainable artificial intelligence (XAI) has come to the fore and piqued the interest of many researchers. Against this backdrop, this study examines a new and original dataset using a deep learning algorithm and visualizes the output with gradient-weighted class activation mapping (Grad-CAM), one of the XAI methods. Afterwards, a detailed questionnaire survey on these images was conducted with pathologists. Both the decision-making processes and the explanations were verified, and the accuracy of the output was tested. The research results greatly help pathologists in the diagnosis of paratuberculosis.
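Since the abstract references Grad-CAM, a minimal sketch of how such class-activation heatmaps are commonly computed is given below; the ResNet-50 backbone, target layer, and hook-based implementation are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal Grad-CAM sketch (PyTorch). The backbone, target layer, and input
# handling here are assumptions for illustration, not the paper's setup.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]          # last convolutional block of ResNet-50

activations, gradients = [], []
target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

def grad_cam(image):
    """image: (1, 3, H, W) tensor; returns a normalized (H, W) heatmap."""
    logits = model(image)
    score = logits[0, logits.argmax()]   # score of the predicted class
    model.zero_grad()
    score.backward()                     # populates the hooked gradients
    acts, grads = activations[-1], gradients[-1]
    weights = grads.mean(dim=(2, 3), keepdim=True)            # channel importance weights
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))   # weighted sum of feature maps
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / cam.max()).squeeze().detach()
```

The resulting heatmap can be overlaid on the histopathological image so that pathologists can inspect which tissue regions drove the model's prediction.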