Graph Neural Networks (GNNs) are a popular approach for prediction tasks on graph-structured data. Because GNNs tightly entangle the input graph with the neural network structure, common explainable AI approaches are not applicable. To a large extent, GNNs have so far remained black boxes for the user. In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e., by identifying groups of edges that jointly contribute to the prediction. Practically, we find that such explanations can be extracted using a nested attribution scheme, where existing techniques such as layer-wise relevance propagation (LRP) can be applied at each step. The output is a collection of walks in the input graph that are relevant for the prediction. Our novel explanation method, which we denote by GNN-LRP, is applicable to a broad range of graph neural networks and lets us extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
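To make the idea of attributing a prediction to walks in the input graph concrete, the following minimal sketch shows the decomposition for a toy *linear* two-layer GNN, where the prediction splits exactly into contributions of walks (i → j → target); all names (A, W1, W2, w_out, target) are hypothetical, and the actual GNN-LRP method uses LRP propagation rules to extend this kind of decomposition to nonlinear GNNs.

```python
import numpy as np

# Toy linear 2-layer GNN: H1 = A X W1, H2 = A H1 W2, output = H2[target] @ w_out.
# For this linear model the prediction decomposes exactly into contributions of
# walks (i -> j -> target) in the input graph, mirroring the idea behind GNN-LRP.

rng = np.random.default_rng(0)

n_nodes, d_in, d_h = 5, 4, 3
A = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)   # hypothetical random adjacency
np.fill_diagonal(A, 1.0)                                   # add self-loops
X = rng.standard_normal((n_nodes, d_in))                   # node features
W1 = rng.standard_normal((d_in, d_h))
W2 = rng.standard_normal((d_h, d_h))
w_out = rng.standard_normal(d_h)

target = 0
H1 = A @ X @ W1
H2 = A @ H1 @ W2
prediction = H2[target] @ w_out

# Relevance of a walk i -> j -> target: the part of the prediction that flows
# along exactly that sequence of edges.
walk_relevance = {}
for j in range(n_nodes):
    for i in range(n_nodes):
        r = A[target, j] * A[j, i] * (X[i] @ W1 @ W2 @ w_out)
        if r != 0.0:
            walk_relevance[(i, j, target)] = r

# Conservation check: the walk relevances sum back to the prediction.
assert np.isclose(sum(walk_relevance.values()), prediction)

# Print the most relevant walks for this prediction.
for walk, r in sorted(walk_relevance.items(), key=lambda kv: -abs(kv[1]))[:5]:
    print(walk, round(float(r), 4))
```

In the nonlinear case the exact product form above no longer holds, which is where the nested attribution scheme with LRP applied at each layer comes in.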