Graph Neural Networks (GNNs) are widely adopted in advanced AI systems owing to their ability to learn representations of graph data. Although explaining GNNs is crucial for increasing user trust in these systems, it remains challenging because of the complexity of GNN execution. Recently, many methods have been proposed to address some of the issues in GNN explanation. However, they either lack generalization capability or become computationally expensive on large graphs. To address these challenges, we propose a multi-level GNN explanation framework based on the observation that GNN inference is a multimodal learning process over multiple components of graph data. The complexity of the original problem is reduced by decomposing it into multiple sub-problems organized as a hierarchical structure. The top-level explanation quantifies the contribution of each component to the model's execution and predictions, while the fine-grained levels focus on feature attribution and graph-structure attribution via knowledge distillation. Student models are trained independently, each capturing a different aspect of the teacher's behavior that is later used to interpret a particular component. In addition, the framework supports personalized explanations, generating different results according to user preferences. Finally, extensive experiments demonstrate the effectiveness and fidelity of our proposed approach.
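To make the distillation idea concrete, the sketch below illustrates one way a "standalone" student could be trained to capture a single aspect of a teacher GNN's behavior. This is a minimal illustration, not the paper's actual implementation: the names `FeatureStudent` and `distill_step` are hypothetical, the student is a feature-only MLP (it never sees edges, so any teacher behavior it reproduces must be attributable to node features alone), and the teacher logits are stubbed with random tensors where the trained GNN's outputs would go.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical feature-only student: an MLP that sees node features but no
# graph structure, so it can only mimic the feature-driven part of the teacher.
class FeatureStudent(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def distill_step(student, optimizer, x, teacher_logits, T=2.0):
    """One standard knowledge-distillation step: match the student's
    temperature-softened predictions to the teacher's via KL divergence."""
    optimizer.zero_grad()
    student_logits = student(x)
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy setup: 100 nodes, 16-dim features, 3 classes. In practice,
# teacher_logits would come from the trained GNN being explained.
x = torch.randn(100, 16)
teacher_logits = torch.randn(100, 3)  # placeholder for real GNN outputs
student = FeatureStudent(16, 32, 3)
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
for _ in range(50):
    distill_step(student, opt, x, teacher_logits)

# Feature attribution from the trained student: input gradients for one node.
x.requires_grad_(True)
score = student(x)[0].max()
score.backward()
print(x.grad[0].abs())  # per-feature saliency for node 0
```

A structure-oriented student could be built analogously, e.g. one that receives the adjacency information but degraded features; comparing what each standalone student can and cannot reproduce is what lets the framework attribute the teacher's predictions to individual components.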