Graph neural networks (GNNs) are neural models that tackle graph-based tasks in an end-to-end manner. GNNs have recently received increasing attention in the machine learning and data mining communities because of the strong performance they achieve on various tasks, including graph classification, link prediction, and recommendation. However, the complicated dynamics of GNNs make it difficult to understand which parts of the graph features contribute most strongly to the predictions. To address this interpretability issue, various GNN explanation methods have recently been proposed. In this study, we propose a flexible, model-agnostic explanation method that detects significant structures in graphs using the Hilbert-Schmidt independence criterion (HSIC), which captures the nonlinear dependency between two variables through kernels. More specifically, we extend GraphLIME, a node explanation method, with group lasso and fused lasso regularization. Combining group and fused regularization with GraphLIME enables GNNs to be interpreted in units of substructures. We then show that the proposed approach can be used to explain sequential graph classification tasks. Experiments demonstrate that our method identifies crucial structures in a target graph in various settings.
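To make the underlying mechanism concrete, the following is a minimal, illustrative sketch of HSIC-Lasso-style feature selection, the kernel-based dependency measure on which GraphLIME builds. It is not the authors' implementation: the function names (`gaussian_gram`, `hsic_lasso_weights`), the Gaussian kernel with a median-heuristic bandwidth, and the toy data are all assumptions made for illustration.

```python
# Illustrative sketch of HSIC-Lasso-style feature selection (not the paper's code).
import numpy as np
from sklearn.linear_model import Lasso

def gaussian_gram(x, gamma=None):
    """Centered, Frobenius-normalized Gaussian Gram matrix of a 1-D variable."""
    x = x.reshape(-1, 1)
    d2 = (x - x.T) ** 2
    if gamma is None:
        # Median heuristic for the kernel bandwidth (a common assumption).
        med = np.median(d2[d2 > 0]) if np.any(d2 > 0) else 1.0
        gamma = 1.0 / med
    K = np.exp(-gamma * d2)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Kc = H @ K @ H
    return Kc / (np.linalg.norm(Kc) + 1e-12)

def hsic_lasso_weights(X, y, alpha=0.01):
    """Feature weights from a nonnegative lasso over vectorized Gram matrices.

    X: (n_samples, n_features) features sampled around the target node.
    y: (n_samples,) GNN outputs for those samples.
    Larger weights indicate features with stronger (possibly nonlinear)
    dependency on the prediction, as measured by HSIC.
    """
    L = gaussian_gram(y).ravel()                        # response kernel
    Ks = np.column_stack([gaussian_gram(X[:, j]).ravel()
                          for j in range(X.shape[1])])  # one column per feature
    model = Lasso(alpha=alpha, positive=True, fit_intercept=False)
    model.fit(Ks, L)
    return model.coef_

# Toy usage: y depends nonlinearly on feature 0 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=60)
print(hsic_lasso_weights(X, y))  # the weight on feature 0 should dominate
```

In the group- and fused-lasso extensions described in the abstract, the plain lasso penalty above would be replaced so that columns of `Ks` belonging to the same substructure share one penalty term, letting whole subgraphs be selected or discarded together.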