Graph Neural Networks (GNNs) have shown remarkable effectiveness in capturing the rich information in graph-structured data. However, the black-box nature of GNNs hinders users from understanding and trusting the models, which limits their practical application. While recent years have witnessed a surge of studies on explaining GNNs, most of them focus on static graphs, leaving the explanation of dynamic GNNs largely unexplored. Explaining dynamic GNNs is challenging due to their time-varying graph structures: directly applying existing explanation models designed for static graphs is infeasible, because they ignore the temporal dependencies among snapshots. In this work, we propose DGExplainer to provide reliable explanations for dynamic GNNs. DGExplainer redistributes the output activation score of a dynamic GNN to the relevance scores of the neurons in its previous layer, and iterates this process until the relevance scores of the input neurons are obtained. We conduct quantitative and qualitative experiments on real-world datasets to demonstrate the effectiveness of the proposed framework in identifying important nodes for link prediction and node regression with dynamic GNNs.
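The abstract describes a backward, layer-by-layer relevance redistribution but does not spell out the exact propagation rule or how recurrent units are handled across snapshots. As a rough illustration only, the sketch below applies the standard LRP epsilon rule to a plain feed-forward ReLU stack; the network, `forward`, and `lrp_backward` are hypothetical names for this example, not DGExplainer's actual implementation.

```python
import numpy as np

# Hypothetical 2-layer ReLU network: list of (weight, bias) pairs.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 16)), np.zeros(8)),
          (rng.normal(size=(1, 8)), np.zeros(1))]

def forward(layers, x):
    """Forward pass through a small ReLU MLP, caching each layer's input."""
    activations = [x]
    for W, b in layers:
        x = np.maximum(W @ x + b, 0.0)
        activations.append(x)
    return activations

def lrp_backward(layers, activations, relevance, eps=1e-6):
    """Epsilon-rule LRP: redistribute the output relevance layer by layer,
    R_i = a_i * sum_j w_ij * R_j / (z_j + eps), where z_j = (W a + b)_j."""
    for (W, b), a in zip(reversed(layers), reversed(activations[:-1])):
        z = W @ a + b                                            # pre-activations z_j
        s = relevance / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized R_j / z_j
        relevance = a * (W.T @ s)                                # a_i * sum_j w_ij * s_j
    return relevance

x = rng.normal(size=16)
acts = forward(layers, x)
input_relevance = lrp_backward(layers, acts, acts[-1])  # start from the output score
print(input_relevance.shape)  # (16,): one relevance score per input neuron
```

The epsilon term only stabilizes near-zero pre-activations, so the redistribution approximately conserves total relevance from the output back to the inputs; a dynamic-graph explainer would additionally have to propagate relevance through the recurrent connections that link snapshots.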