Recently, subgraphs-enhanced Graph Neural Networks (SGNNs) have been introduced to enhance the expressive power of Graph Neural Networks (GNNs), which has been proved to be at most as powerful as the 1-dimensional Weisfeiler-Leman isomorphism test. The new paradigm suggests using subgraphs extracted from the input graph to improve the model's expressiveness, but the additional complexity exacerbates an already challenging problem in GNNs: explaining their predictions. In this work, we adapt PGExplainer, one of the most recent explainers for GNNs, to SGNNs. The proposed explainer accounts for the contribution of all the different subgraphs and can produce meaningful explanations that humans can interpret. Our experiments on both real and synthetic datasets show that the framework successfully explains the decision process of an SGNN on graph classification tasks.
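To illustrate the idea of "accounting for the contribution of all the different subgraphs", the following is a minimal sketch, not the authors' implementation: it assumes a PGExplainer-style model has already produced edge-importance logits for each extracted subgraph, with subgraph edges indexed against the original graph's edge list, and combines them into a single explanation mask. The function name `aggregate_subgraph_masks` and the averaging scheme are hypothetical choices for this example.

```python
import torch

def aggregate_subgraph_masks(subgraph_scores, num_edges):
    """Combine per-subgraph edge-importance logits into one mask.

    subgraph_scores: list of (edge_ids, logits) pairs, one per subgraph,
    where edge_ids index edges of the original input graph.
    Returns an importance score in [0, 1] for every edge of the input graph.
    """
    total = torch.zeros(num_edges)
    counts = torch.zeros(num_edges)
    for edge_ids, logits in subgraph_scores:
        # Each subgraph contributes its (sigmoid-squashed) importance scores
        # only for the edges it actually contains.
        total.index_add_(0, edge_ids, torch.sigmoid(logits))
        counts.index_add_(0, edge_ids, torch.ones_like(logits))
    # Average the contributions across the subgraphs that cover each edge.
    return total / counts.clamp(min=1)

# Toy usage: a 5-edge graph explained through 2 extracted subgraphs.
masks = [
    (torch.tensor([0, 1, 2]), torch.tensor([2.0, -1.0, 0.5])),
    (torch.tensor([1, 2, 4]), torch.tensor([1.5, 0.0, -2.0])),
]
print(aggregate_subgraph_masks(masks, num_edges=5))
```

Averaging is only one plausible way to merge subgraph-level evidence; the key point the abstract makes is that the explanation is defined over the original graph while drawing on every subgraph the SGNN actually uses.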