This survey presents an overview of integrating prior knowledge into machine learning systems in order to improve their explainability. The growing complexity of machine learning models has driven research into making them more explainable. However, most explainability methods cannot provide insight beyond the given data and therefore require additional contextual information. We propose to harness prior knowledge to improve the explanation capabilities of machine learning models. In this paper, we present a categorization of current research into three main categories: integrating knowledge into the machine learning pipeline, integrating knowledge into the explainability method, and deriving knowledge from explanations. To classify the papers, we build upon the existing taxonomy of informed machine learning and extend it from the perspective of explainability. We conclude with open challenges and research directions.