The use of deep learning in computer vision tasks such as image classification has led to rapid gains in the performance of such systems. Because of this substantial improvement in utility, the use of artificial intelligence in many critical tasks has grown rapidly. In the medical domain, medical image classification systems are being adopted because of their high accuracy and near parity with human physicians on many tasks. However, these artificial intelligence systems are extremely complex and are regarded as black boxes, since it is difficult to interpret exactly what led to the predictions these models make. When such systems are used to assist high-stakes decision-making, it is essential to be able to understand, verify, and justify the conclusions the model reaches. The research techniques used to gain insight into these black-box models belong to the field of explainable artificial intelligence (XAI). In this paper, we evaluated three different XAI methods across two convolutional neural network models trained to classify lung cancer from histopathological images. We visualized the outputs and analyzed the performance of these methods in order to better understand how to apply explainable artificial intelligence in the medical domain.