Graph Neural Networks (GNNs) are a popular class of machine learning models. Inspired by the learning-to-explain (L2X) paradigm, we propose L2XGNN, a framework for explainable GNNs that provides faithful explanations by design. L2XGNN learns a mechanism for selecting explanatory subgraphs (motifs) which are exclusively used in the GNNs' message-passing operations. L2XGNN is able to select, for each input graph, a subgraph with specific properties, such as being sparse and connected. Imposing such constraints on the motifs often leads to more interpretable and effective explanations. Experiments on several datasets suggest that L2XGNN achieves the same classification accuracy as baseline methods using the entire input graph, while ensuring that only the provided explanations are used to make predictions. Moreover, we show that L2XGNN is able to identify motifs responsible for the graph properties it is trained to predict.
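To make the core idea concrete, below is a minimal sketch of message passing restricted to a selected edge subset. This is not the authors' implementation: the layer name `MaskedMPLayer`, the plain-PyTorch sum aggregation, and the assumption that a binary edge mask is already given are all illustrative simplifications; how L2XGNN actually learns the mask is described in the paper, not here.

```python
import torch
import torch.nn as nn

class MaskedMPLayer(nn.Module):
    """Hypothetical message-passing layer: messages flow only along
    edges selected by a binary mask (the explanatory subgraph)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, edge_mask):
        # x:         [num_nodes, in_dim]   node features
        # edge_index: [2, num_edges]       (source, target) node indices
        # edge_mask:  [num_edges]          1.0 = edge kept, 0.0 = edge dropped
        src, dst = edge_index
        msgs = x[src] * edge_mask.unsqueeze(-1)  # zero out unselected edges
        # Sum incoming messages per target node; masked edges contribute nothing.
        agg = torch.zeros_like(x).index_add_(0, dst, msgs)
        return torch.relu(self.lin(agg))

# Toy usage: 3 nodes, edges 0->1, 1->2, 2->0; keep only the first two edges.
x = torch.randn(3, 4)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])
edge_mask = torch.tensor([1.0, 1.0, 0.0])  # the "explanation" subgraph
layer = MaskedMPLayer(4, 8)
out = layer(x, edge_index, edge_mask)      # shape [3, 8]
```

Because the mask multiplies the messages themselves, the prediction provably depends only on the selected subgraph, which is what makes the explanation faithful by design rather than post hoc.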