In Graph Neural Networks (GNNs), the graph structure is incorporated into the learning of node representations. This complex structure makes explaining GNNs' predictions much more challenging. In this paper, we propose PGM-Explainer, a Probabilistic Graphical Model (PGM) model-agnostic explainer for GNNs. Given a prediction to be explained, PGM-Explainer identifies crucial graph components and generates an explanation in the form of a PGM approximating that prediction. Different from existing explainers for GNNs, whose explanations are drawn from a set of linear functions of the explained features, PGM-Explainer is able to demonstrate the dependencies among explained features in the form of conditional probabilities. Our theoretical analysis shows that the PGM generated by PGM-Explainer includes the Markov blanket of the target prediction, i.e., it contains all of its statistical information. We also show that the explanation returned by PGM-Explainer contains the same set of independence statements as the perfect map. Our experiments on both synthetic and real-world datasets show that PGM-Explainer achieves better performance than existing explainers on many benchmark tasks.