Although deep neural networks (DNNs) have achieved state-of-the-art performance in many fields, unexpected errors are still frequently found in these networks, which is dangerous for tasks requiring high reliability and security. The opacity and lack of interpretability of CNNs continue to limit their application in domains such as medical care and finance. Although existing studies have attempted to visualize the decision process of DNNs, most of these methods operate at a low level and do not take prior medical knowledge into account. In this work, we propose an interpretable framework based on key medical concepts, enabling a CNN to explain its decisions from the perspective of a doctor's cognition. Specifically, we propose an interpretable automatic recognition framework for ultrasound standard planes, which uses a concept-based graph convolutional neural network to model the relationships between key medical concepts and thereby produce explanations consistent with doctors' cognition.
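To make the concept-graph idea concrete, the sketch below shows a single graph-convolution layer applied to a small graph of medical concepts. This is a minimal illustration under assumed names and shapes, not the authors' implementation: node features stand in for per-concept CNN activations, and the adjacency matrix stands in for doctor-defined relations between concepts.

```python
import numpy as np

rng = np.random.default_rng(0)

n_concepts, feat_dim, hidden_dim = 4, 8, 16

# Hypothetical prior-knowledge graph: A[i, j] = 1 if concept i is
# clinically related to concept j (symmetric adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X = rng.normal(size=(n_concepts, feat_dim))   # per-concept CNN features (stand-in)
W = rng.normal(size=(feat_dim, hidden_dim))   # learnable layer weights

# Standard normalized GCN propagation: H = ReLU(D^-1/2 (A+I) D^-1/2 X W)
A_hat = A + np.eye(n_concepts)                # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

H = np.maximum(A_norm @ X @ W, 0.0)           # updated concept embeddings
print(H.shape)
```

Each concept's embedding is updated by aggregating features from its related concepts, so a downstream classifier reasons over doctor-meaningful units rather than raw pixels.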