We study the approximation power of Graph Neural Networks (GNNs) on latent position random graphs. In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs, which directly enables a study of their approximation power on random graph models. In the absence of input node features, however, just as GNNs are limited by the Weisfeiler-Lehman isomorphism test, c-GNNs are severely limited on simple random graph models. For instance, they fail to distinguish the communities of a well-separated Stochastic Block Model (SBM) with constant degree function. Thus, we consider recently proposed architectures that augment GNNs with unique node identifiers, referred to here as Structural GNNs (SGNNs). We study the convergence of SGNNs to their continuous counterparts (c-SGNNs) in the large random graph limit, under new conditions on the node identifiers. We then show that c-SGNNs are strictly more powerful than c-GNNs in the continuous limit, and prove their universality on several random graph models of interest, including most SBMs and a large class of random geometric graphs. Our results cover both permutation-invariant and permutation-equivariant architectures.
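The limitation mentioned above can be seen on a small example. The following sketch (an illustration, not code from the paper) builds a graph with two obvious communities but constant degree, and runs a few featureless message-passing layers with sum aggregation and shared weights; by induction from a constant input, every node receives the same aggregate at every layer, so all embeddings coincide and the communities cannot be separated:

```python
import numpy as np

# Hypothetical example: two 4-cliques (degree 3 inside each) joined by a
# perfect matching across them, giving every node degree exactly 4.
# The two cliques are the "communities".
n = 8
A = np.zeros((n, n))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
for i in range(4):                       # matching across communities
    A[i, i + 4] = A[i + 4, i] = 1.0
assert (A.sum(axis=1) == 4).all()        # constant degree

# Featureless message passing: constant input, sum aggregation over
# neighbours, a shared random linear map, and a ReLU at each layer.
rng = np.random.default_rng(0)
H = np.ones((n, 2))                      # no node features: all-ones input
for _ in range(3):
    W = rng.standard_normal((H.shape[1], 2))
    H = np.maximum(A @ H @ W, 0.0)       # sum aggregation + shared weights

# All rows of H are identical: the featureless GNN cannot tell the
# two communities apart, despite the clear block structure of A.
print(np.allclose(H, H[0]))
```

Running this prints `True`: because the degree is constant, a constant input stays constant through every layer, mirroring the c-GNN failure mode on constant-degree SBMs described above.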