This paper presents Gem, a model-agnostic approach for providing interpretable explanations for any GNN on various graph learning tasks. Specifically, we formulate the problem of explaining the decisions of GNNs as a causal learning task. We then train a causal explanation model equipped with a loss function based on Granger causality. Different from existing explainers for GNNs, Gem explains GNNs on graph-structured data from a causal perspective. It has better generalization ability as it places no requirements on the internal structure of the GNN and needs no prior knowledge of the graph learning task. In addition, Gem, once trained, can explain the target GNN very quickly. Our theoretical analysis shows that several recent explainers fall into a unified framework of additive feature attribution methods. Experimental results on synthetic and real-world datasets show that Gem achieves a relative increase in explanation accuracy of up to $30\%$ and speeds up the explanation process by up to $110\times$ compared to its state-of-the-art alternatives.