Recent research on graph neural network (GNN) models has successfully applied GNNs to classical graph algorithms and combinatorial optimisation problems. This has numerous benefits, such as allowing algorithms to be applied when their preconditions are not satisfied, or reusing learned models when sufficient training data is not available or cannot be generated. Unfortunately, a key hindrance of these approaches is their lack of explainability, since GNNs are black-box models that cannot be interpreted directly. In this work, we address this limitation by applying existing work on concept-based explanations to GNN models. We introduce concept-bottleneck GNNs, which rely on a modification to the GNN readout mechanism. Using three case studies we demonstrate that: (i) our proposed model is capable of accurately learning concepts and extracting propositional formulas over the learned concepts for each target class; (ii) our concept-based GNN models achieve performance comparable to state-of-the-art models; (iii) we can derive global graph concepts without explicitly providing any supervision on graph-level concepts.
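To make the readout modification concrete, the following is a minimal sketch of a concept-bottleneck readout in PyTorch. It is an illustrative assumption, not the authors' exact architecture: pooled node embeddings are squeezed through a small layer of sigmoid-activated concept scores, and the class prediction is a linear function of those concepts alone, so thresholded concepts can later be composed into propositional formulas per class. The names `ConceptBottleneckReadout`, `to_concepts`, and the chosen dimensions are hypothetical.

```python
import torch
import torch.nn as nn


class ConceptBottleneckReadout(nn.Module):
    """Illustrative concept-bottleneck readout (assumed design, not the paper's exact one).

    Node embeddings are sum-pooled into a graph embedding, mapped to a small
    vector of sigmoid concept scores, and classified from those scores only.
    """

    def __init__(self, hidden_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.to_concepts = nn.Linear(hidden_dim, n_concepts)
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, node_embeddings: torch.Tensor):
        # Sum-pool node embeddings of one graph into a single graph embedding.
        graph_embedding = node_embeddings.sum(dim=0)
        # Concept scores in [0, 1]; thresholding yields Boolean concepts that
        # can be combined into per-class propositional formulas.
        concepts = torch.sigmoid(self.to_concepts(graph_embedding))
        logits = self.classifier(concepts)
        return concepts, logits


# Example usage with random embeddings for a 10-node graph.
readout = ConceptBottleneckReadout(hidden_dim=64, n_concepts=8, n_classes=2)
concepts, logits = readout(torch.randn(10, 64))
```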