We study the approximation power of Graph Neural Networks (GNNs) on latent position random graphs. In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs, which directly enables a study of their approximation power on random graph models. In the absence of input node features, however, just as GNNs are limited by the Weisfeiler-Lehman isomorphism test, c-GNNs are severely limited on simple random graph models. For instance, they fail to distinguish the communities of a well-separated Stochastic Block Model (SBM) with constant degree function. We therefore consider recently proposed architectures that augment GNNs with unique node identifiers, sometimes referred to as Graph Wavelets Neural Networks (GWNNs). We study the convergence of GWNNs to their continuous counterparts (c-GWNNs) in the large random graph limit, under new conditions on the node identifiers. We then show that c-GWNNs are strictly more powerful than c-GNNs in the continuous limit, and prove their universality on several random graph models of interest, including most SBMs and a large class of random geometric graphs. Our results cover both permutation-invariant and permutation-equivariant architectures.