While instance-level explanation of GNNs is a well-studied problem, with a wealth of available approaches, providing a global explanation of the behaviour of a GNN is far less explored, despite its potential for interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned. In this work, we propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as inputs and combines them into a logic formula over graphical concepts, represented as clusters of local explanations. Contrary to existing solutions, GLGExplainer provides accurate and human-interpretable global explanations that are perfectly aligned with ground-truth explanations (on synthetic data) or match existing domain knowledge (on real-world data). The extracted formulas are faithful to the model's predictions, to the point of providing insights into some occasionally incorrect rules learned by the model, making GLGExplainer a promising diagnostic tool for learned GNNs.
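To make the described pipeline concrete, below is a minimal PyTorch sketch of how local-explanation embeddings could be softly assigned to learnable concept prototypes (the clusters of local explanations), with the resulting concept activations feeding a differentiable head from which a formula can later be read off. All names, dimensions, and the prototype-distance formulation are illustrative assumptions, not the authors' implementation; in particular, a plain linear layer stands in here for the paper's differentiable logic module.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the GLGExplainer pipeline; names and the
# distance-based concept layer are illustrative assumptions.
class GLGExplainerSketch(nn.Module):
    def __init__(self, emb_dim=16, n_concepts=4, n_classes=2):
        super().__init__()
        # Learnable prototypes: each represents a cluster of local
        # explanations, i.e. one graphical concept.
        self.prototypes = nn.Parameter(torch.randn(n_concepts, emb_dim))
        # Stand-in for the differentiable logic layer that maps concept
        # activations to class scores (and from which a formula is extracted).
        self.logic_head = nn.Linear(n_concepts, n_classes)

    def forward(self, expl_emb):
        # expl_emb: (batch, emb_dim) embeddings of local-explanation subgraphs.
        dists = torch.cdist(expl_emb, self.prototypes)       # (batch, n_concepts)
        concept_acts = torch.softmax(-dists, dim=-1)          # soft, near-Boolean
        return self.logic_head(concept_acts), concept_acts    # logits + concepts
```

After training, one plausible way to obtain a global explanation is to threshold the concept activations to {0, 1} and read a Boolean formula off the head; in the paper this role is played by an entropy-based logic layer rather than the linear stand-in used above.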