Prior work on formalizing explanations of graph neural networks (GNNs) focuses on a single use case: preserving prediction results by identifying important edges and nodes. In this paper, we develop a multi-purpose interpretation framework that acquires a mask indicating topology perturbations of the input graphs. We package the framework into an interactive visualization system (GNNViz) that can fulfill multiple purposes: Preserve, Promote, or Attack a GNN's predictions. We illustrate our approach's novelty and effectiveness with three case studies. First, GNNViz helps non-expert users easily explore the relationship between graph topology and a GNN's decisions (Preserve), or manipulate the predictions (Promote or Attack), for an image classification task on MS-COCO. Second, on the Pokec social network dataset, our framework can uncover unfairness and demographic biases. Lastly, we compare it with a state-of-the-art GNN explainer baseline on a synthetic dataset.