Deep neural networks have achieved remarkable performance on a wide range of machine learning and artificial intelligence tasks. However, due to their over-parameterized, black-box nature, it is often difficult to understand the predictions of deep models. In recent years, many interpretation tools have been proposed to explain or reveal how deep models make decisions. In this paper, we review this line of research and provide a comprehensive survey. Specifically, we first introduce and clarify two basic concepts that are often confused -- interpretations and interpretability. To cover the research efforts in interpretations, we elaborate the designs of a number of interpretation algorithms from different perspectives and propose a new taxonomy. Then, to understand the interpretation results, we survey the performance metrics used to evaluate interpretation algorithms. Further, we summarize current work on evaluating models' interpretability using "trustworthy" interpretation algorithms. Finally, we review and discuss the connections between deep models' interpretations and other factors, such as adversarial robustness and learning from interpretations, and we introduce several open-source libraries for interpretation algorithms and evaluation approaches.