Like other connectionist models, Graph Neural Networks (GNNs) lack transparency in their decision-making. A number of sub-symbolic approaches have been developed to provide insights into the GNN decision-making process. These are important first steps towards explainability, but the generated explanations are often hard to understand for users who are not AI experts. To overcome this problem, we introduce a conceptual approach that combines sub-symbolic and symbolic methods to produce human-centric explanations incorporating domain knowledge and causality. We further introduce the notion of fidelity as a metric for evaluating how closely an explanation reflects the GNN's internal decision-making process. An evaluation on a chemical dataset and ontology demonstrates the explanatory value and reliability of our method.