While message passing Graph Neural Networks (GNNs) have become increasingly popular architectures for learning with graphs, recent works have revealed important shortcomings in their expressive power. In response, several higher-order GNNs have been proposed that substantially increase the expressive power, albeit at a large computational cost. Motivated by this gap, we explore alternative strategies and lower bounds. In particular, we analyze a new recursive pooling technique over local neighborhoods that allows different tradeoffs between computational cost and expressive power. First, we prove that this model can count subgraphs of size $k$, thereby overcoming a known limitation of low-order GNNs. Second, we show how recursive pooling can exploit sparsity to reduce computational complexity compared to existing higher-order GNNs. More generally, we provide a (near) matching information-theoretic lower bound for counting subgraphs with graph representations that pool over representations of derived (sub-)graphs. We also discuss lower bounds on time complexity.
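To make the pooling idea concrete, below is a minimal Python sketch of recursive pooling over punctured local neighborhoods. It is an illustration under simplifying assumptions of our own, not the paper's model: the names `recursive_pool`, `neighborhood`, and `induced` are hypothetical, the base case is a plain degree multiset rather than a learned representation, and graphs are encoded as adjacency-list dicts.

```python
from typing import Dict, List

Adj = Dict[int, List[int]]  # graph as adjacency lists

def neighborhood(adj: Adj, v: int, radius: int) -> set:
    """All vertices within `radius` hops of v (including v), via BFS."""
    seen, frontier = {v}, {v}
    for _ in range(radius):
        frontier = {u for w in frontier for u in adj[w]} - seen
        seen |= frontier
    return seen

def induced(adj: Adj, nodes: set) -> Adj:
    """Adjacency lists of the subgraph induced on `nodes`."""
    return {v: [u for u in adj[v] if u in nodes] for v in nodes}

def recursive_pool(adj: Adj, depth: int):
    """Represent a graph as a nested multiset (encoded as a sorted tuple):
    pool, over every vertex v, the recursive representation of v's
    punctured 1-hop neighborhood (v's neighbors, without v itself).
    At depth 0, fall back to the degree multiset as a crude invariant."""
    if depth == 0:
        return tuple(sorted(len(nbrs) for nbrs in adj.values()))
    reps = [
        recursive_pool(induced(adj, neighborhood(adj, v, 1) - {v}), depth - 1)
        for v in adj
    ]
    return tuple(sorted(reps))

# Two disjoint triangles vs. a 6-cycle: identical degree multisets
# (depth 0 cannot tell them apart), but depth 1 separates them.
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
six_cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
assert recursive_pool(two_triangles, 0) == recursive_pool(six_cycle, 0)
assert recursive_pool(two_triangles, 1) != recursive_pool(six_cycle, 1)
```

In this toy setting, two disjoint triangles and a 6-cycle, a classic pair that plain message passing (1-WL) cannot distinguish, are already separated at recursion depth 1: the punctured neighborhood of a triangle vertex contains an edge, while that of a cycle vertex does not. This mirrors, in a crude way, how recursing into local neighborhoods can detect small substructures that low-order GNNs miss.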