Deep learning, as represented by deep neural networks (DNNs), has achieved great success in many important areas that deal with text, images, videos, graphs, and so on. However, the black-box nature of DNNs has become one of the primary obstacles to their wide acceptance in mission-critical applications such as medical diagnosis and therapy. Given the huge potential of deep learning, interpreting neural networks has recently attracted much research attention. In this paper, based on our comprehensive taxonomy, we systematically review recent studies on understanding the mechanisms of neural networks, describe applications of interpretability, especially in medicine, and discuss future directions of interpretability research, such as its relation to fuzzy logic and brain science.