Numerous subgraph-enhanced graph neural networks (GNNs) have emerged recently, provably boosting the expressive power of standard (message-passing) GNNs. However, there is a limited understanding of how these approaches relate to each other and to the Weisfeiler-Leman hierarchy. Moreover, current approaches either use all subgraphs of a given size, sample them uniformly at random, or use hand-crafted heuristics instead of learning to select subgraphs in a data-driven manner. Here, we offer a unified way to study such architectures by introducing a theoretical framework and extending the known expressivity results of subgraph-enhanced GNNs. Concretely, we show that increasing subgraph size always increases the expressive power and develop a better understanding of their limitations by relating them to the established $k\text{-}\mathsf{WL}$ hierarchy. In addition, we explore different approaches for learning to sample subgraphs using recent methods for backpropagating through complex discrete probability distributions. Empirically, we study the predictive performance of different subgraph-enhanced GNNs, showing that our data-driven architectures increase prediction accuracy on standard benchmark datasets compared to non-data-driven subgraph-enhanced graph neural networks while reducing computation time.
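To make the "learning to sample subgraphs" idea concrete, below is a minimal sketch of one way to backpropagate through a discrete subgraph-selection step, assuming a straight-through Gumbel-Softmax relaxation (one of several gradient estimators for discrete distributions; not necessarily the exact estimator used in this work). The class name `SubgraphSelector`, its parameters, and the node-scoring network are hypothetical and only illustrate the general mechanism.

```python
# Hypothetical sketch: learn to select root nodes for subgraph extraction by
# backpropagating through discrete sampling with straight-through Gumbel-Softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubgraphSelector(nn.Module):
    """Scores each node and samples root nodes; subgraphs (e.g., the k-hop
    neighborhood of each sampled root) would then be fed to a downstream GNN."""

    def __init__(self, in_dim, hidden_dim, num_samples=8, tau=1.0):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
        )
        self.num_samples = num_samples  # number of subgraph roots to draw
        self.tau = tau                  # relaxation temperature

    def forward(self, node_feats):
        # node_feats: [num_nodes, in_dim]
        logits = self.scorer(node_feats).squeeze(-1)                   # [num_nodes]
        logits = logits.unsqueeze(0).expand(self.num_samples, -1)      # [num_samples, num_nodes]
        # hard=True yields one-hot samples in the forward pass while gradients
        # flow through the soft relaxation (straight-through estimator).
        one_hot = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        roots = one_hot.argmax(dim=-1)        # indices of sampled root nodes
        return roots, one_hot                 # one_hot keeps the computation differentiable

# Usage example on a toy graph with 10 nodes and 16-dimensional features.
selector = SubgraphSelector(in_dim=16, hidden_dim=32)
x = torch.randn(10, 16)
roots, mask = selector(x)
print(roots.shape, mask.shape)  # torch.Size([8]) torch.Size([8, 10])
```

In practice, the soft one-hot mask would be used to weight the extracted subgraphs' contributions to the downstream prediction, so that the node-scoring network receives gradient signal from the task loss.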