Graphs are ubiquitous in many applications, such as social networks, knowledge graphs, and smart grids. Graph neural networks (GNNs) are the current state of the art for these applications, yet their predictions remain opaque to humans. Explaining GNN predictions can add transparency. However, since many graphs are not static but continuously evolving, explaining the change in predictions between two graph snapshots is a distinct and equally important problem. Prior methods either explain only static predictions or generate coarse or irrelevant explanations for dynamic predictions. We define the problem of explaining evolving GNN predictions and propose an axiomatic attribution method that uniquely decomposes the change in a prediction into paths on computation graphs. Attribution over many paths involving high-degree nodes is still not interpretable, while simply selecting the top important paths can be suboptimal for approximating the change. We therefore formulate a novel convex optimization problem to optimally select the paths that explain the prediction evolution. Theoretically, we prove that an existing method based on Layer-wise Relevance Propagation (LRP) is a special case of the proposed algorithm when the comparison is made against an empty graph. Empirically, on seven graph datasets, using a novel metric designed for evaluating explanations of prediction change, we demonstrate the superiority of the proposed approach over existing methods, including LRP, DeepLIFT, and other path selection methods.