Despite the recent progress in Graph Neural Networks (GNNs), it remains challenging to explain the predictions made by GNNs. Existing explanation methods mainly focus on post-hoc explanations, where a separate explanatory model is employed to provide explanations for a trained GNN. The fact that post-hoc methods fail to reveal the original reasoning process of GNNs raises the need for building GNNs with built-in interpretability. In this work, we propose Prototype Graph Neural Network (ProtGNN), which combines prototype learning with GNNs and provides a new perspective on the explanations of GNNs. In ProtGNN, the explanations are naturally derived from the case-based reasoning process and are actually used during classification. The prediction of ProtGNN is obtained by comparing the inputs to a few learned prototypes in the latent space. Furthermore, for better interpretability and higher efficiency, a novel conditional subgraph sampling module is incorporated in ProtGNN+ to indicate which part of the input graph is most similar to each prototype. Finally, we evaluate our method on a wide range of datasets and perform concrete case studies. Extensive results show that ProtGNN and ProtGNN+ can provide inherent interpretability while achieving accuracy on par with their non-interpretable counterparts.
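To make the prototype-comparison idea concrete, below is a minimal sketch of a prototype-based classification head in PyTorch. It assumes a ProtoPNet-style log-ratio similarity between graph embeddings and learnable prototype vectors; the names (`PrototypeHead`, `num_prototypes`) and hyperparameters are illustrative assumptions, not taken from the paper's released code.

```python
import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    """Sketch of a prototype layer: classifies a graph embedding by its
    similarity to a small set of learned prototypes in the latent space."""

    def __init__(self, embed_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        # Learnable prototype vectors living in the same latent space
        # as the GNN's graph embeddings.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, embed_dim))
        # Linear layer mapping prototype similarities to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, graph_embedding: torch.Tensor) -> torch.Tensor:
        # Squared L2 distance between each embedding and each prototype.
        dist = torch.cdist(graph_embedding, self.prototypes).pow(2)
        # Log-ratio similarity: large when an embedding is close to a prototype,
        # so each logit is a weighted sum of interpretable similarity scores.
        sim = torch.log((dist + 1.0) / (dist + 1e-4))
        return self.classifier(sim)

# Usage: class logits for a batch of 8 graph embeddings of dimension 128,
# produced by any upstream GNN encoder.
head = PrototypeHead(embed_dim=128, num_prototypes=10, num_classes=2)
logits = head(torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 2])
```

Because the logits are linear in the prototype similarities, each prediction can be traced back to the handful of prototypes that contributed most, which is the source of the built-in interpretability described above.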