Subgraph-enhanced graph neural networks (SGNNs) can increase the expressive power of the standard message-passing framework. This family of models represents each graph as a collection of subgraphs, generally extracted by random sampling or with hand-crafted heuristics. Our key observation is that by selecting "meaningful" subgraphs, in addition to improving the expressive power of a GNN, it is also possible to obtain interpretable results. To this end, we introduce a novel framework that jointly predicts the class of the graph and a set of explanatory sparse subgraphs, which can be analyzed to understand the decision process of the classifier. We compare the performance of our framework against standard subgraph extraction policies, such as random node/edge deletion strategies. The subgraphs produced by our framework allow us to achieve comparable accuracy, with the additional benefit of providing explanations.
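To make the idea of jointly predicting a graph label and an explanatory sparse subgraph concrete, the following is a minimal sketch in plain PyTorch. It is not the authors' implementation: the module names (`EdgeSelector`, `SubgraphGNN`), the dense-adjacency representation, and the single message-passing step are illustrative assumptions. A learnable edge-scoring module gates the adjacency matrix to produce a (soft) sparse subgraph, and the classifier propagates messages only over that masked graph, so the mask can be inspected as an explanation.

```python
# Illustrative sketch only (assumed design, not the paper's architecture):
# an edge-scoring module selects a sparse explanatory subgraph, and a simple
# message-passing classifier predicts the graph label from the masked graph.
import torch
import torch.nn as nn


class EdgeSelector(nn.Module):
    """Scores each edge; a sigmoid gate yields a soft edge mask."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, h, adj):
        n = h.size(0)
        # Concatenate node embeddings for every potential edge (i, j).
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1), h.unsqueeze(0).expand(n, n, -1)], dim=-1
        )
        gate = torch.sigmoid(self.scorer(pairs)).squeeze(-1)
        return adj * gate  # masked (explanatory) adjacency


class SubgraphGNN(nn.Module):
    """Jointly produces an explanatory subgraph and a graph-level prediction."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden_dim)
        self.selector = EdgeSelector(hidden_dim)
        self.mp = nn.Linear(hidden_dim, hidden_dim)   # one message-passing step
        self.readout = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, adj):
        h = torch.relu(self.embed(x))
        masked_adj = self.selector(h, adj)            # sparse explanatory subgraph
        h = torch.relu(self.mp(masked_adj @ h))       # propagate over the subgraph
        logits = self.readout(h.mean(dim=0))          # mean-pool readout
        return logits, masked_adj


# Usage: a random graph with 6 nodes and 8-dimensional node features.
x = torch.randn(6, 8)
adj = (torch.rand(6, 6) > 0.5).float()
model = SubgraphGNN(in_dim=8, hidden_dim=16, num_classes=2)
logits, explanation = model(x, adj)
print(logits.shape, explanation.shape)  # torch.Size([2]) torch.Size([6, 6])
```

In a setup like this, sparsity of the explanatory subgraph would typically be encouraged by adding an L1 penalty on the edge mask to the classification loss; the masked adjacency can then be thresholded and visualized to inspect the classifier's decision.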