As Graph Neural Networks (GNNs) are increasingly employed in critical real-world applications, several methods have been proposed in recent literature to explain the predictions of these models. However, there has been little to no work on systematically analyzing the reliability of these methods. Here, we introduce the first theoretical analysis of the reliability of state-of-the-art GNN explanation methods. More specifically, we theoretically analyze the behavior of various state-of-the-art GNN explanation methods with respect to several desirable properties (e.g., faithfulness, stability, and fairness preservation) and establish upper bounds on the violation of these properties. We also empirically validate our theoretical results through extensive experiments on nine real-world graph datasets. Our empirical results further reveal several interesting insights about the behavior of state-of-the-art GNN explanation methods.