Although deep neural networks (DNNs) have achieved state-of-the-art performance in many fields, they still produce unexpected errors, which is dangerous for tasks that demand high reliability and security. The opacity and lack of explainability of convolutional neural networks (CNNs) continue to limit their adoption in domains such as medical care and finance. While existing studies have sought to visualize the decision process of DNNs, most of these methods operate at a low level (e.g., pixel-level saliency) and do not incorporate medical prior knowledge. In this work, we propose an interpretable framework built on key medical concepts, enabling a CNN to explain its decisions from the perspective of a doctor's cognition. Specifically, we present an interpretable automatic recognition framework for ultrasonic standard planes that uses a concept-based graph convolutional network to model the relationships between key medical concepts, yielding interpretations consistent with a doctor's reasoning. Extensive experiments show empirically that our model can meaningfully explain the classifier's decisions and provide quantitative support for those explanations.
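To make the architectural idea concrete, the sketch below shows one plausible realization of a concept-based graph convolutional classifier, assuming a PyTorch implementation. The class names, the placeholder backbone, the layer sizes, and the identity adjacency are illustrative assumptions, not the paper's actual code; the key point is that per-concept evidence scores, propagated over a concept graph, both drive the prediction and serve as the explanation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConvLayer(nn.Module):
    """One graph convolution in the Kipf-Welling style: H' = ReLU(A_hat H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj_norm):
        # adj_norm: (num_concepts, num_concepts) normalized concept adjacency;
        # broadcasting applies it across the batch dimension of h.
        return F.relu(self.linear(adj_norm @ h))

class ConceptGCNClassifier(nn.Module):
    """Hypothetical sketch: CNN features are projected onto concept nodes,
    a two-layer GCN propagates information along concept relations, and the
    attention-pooled concept embeddings feed the plane classifier. The
    per-concept scores are exposed as the model's explanation."""
    def __init__(self, num_concepts, feat_dim=512, hid_dim=128, num_classes=5):
        super().__init__()
        # Placeholder backbone; the paper's actual CNN is unspecified here.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        # Project the global image feature onto one embedding per concept node.
        self.to_concepts = nn.Linear(feat_dim, num_concepts * hid_dim)
        self.num_concepts, self.hid_dim = num_concepts, hid_dim
        self.gcn1 = GraphConvLayer(hid_dim, hid_dim)
        self.gcn2 = GraphConvLayer(hid_dim, hid_dim)
        self.concept_score = nn.Linear(hid_dim, 1)   # per-concept evidence
        self.classifier = nn.Linear(hid_dim, num_classes)

    def forward(self, x, adj_norm):
        feat = self.backbone(x)                                  # (B, feat_dim)
        h = self.to_concepts(feat).view(-1, self.num_concepts, self.hid_dim)
        h = self.gcn2(self.gcn1(h, adj_norm), adj_norm)          # (B, C, hid)
        scores = self.concept_score(h).squeeze(-1)               # (B, C): explanation
        pooled = (h * scores.softmax(-1).unsqueeze(-1)).sum(1)   # attention pooling
        return self.classifier(pooled), scores

# Hypothetical usage with 8 medical concepts; the identity matrix stands in
# for the expert-defined (or learned) concept adjacency.
A = torch.eye(8)
model = ConceptGCNClassifier(num_concepts=8)
logits, concept_scores = model(torch.randn(2, 1, 224, 224), A)
```

Under these assumptions, `concept_scores` gives a quantitative, doctor-readable account of which key medical concepts contributed to each standard-plane prediction, which is the kind of interpretation the framework targets.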