In this work we develop a new method, named Sub-graph Permutation Equivariant Networks (SPEN), which provides a framework for building graph neural networks that operate on sub-graphs, use a permutation equivariant base update function, and are equivariant to a novel choice of automorphism group. Message passing neural networks have been shown to be limited in their expressive power, and recent approaches to overcome this either lack scalability or require structural information to be encoded into the feature space. The general framework presented here overcomes the scalability issues associated with global permutation equivariance by operating more locally, on sub-graphs. In addition, operating on sub-graphs improves upon the expressive power of higher-dimensional global permutation equivariant networks, since two non-distinguishable graphs often contain distinguishable sub-graphs. Furthermore, the proposed framework requires only a choice of $k$-hops for creating ego-network sub-graphs and a choice of representation space for each layer, which makes the method easily applicable across a range of graph-based domains. We experimentally validate the method on a range of graph benchmark classification tasks, demonstrating results statistically indistinguishable from the state of the art on six out of seven benchmarks. Further, we demonstrate that the use of local update functions offers a significant improvement in GPU memory usage over global methods.
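The abstract notes that the framework's only structural choice is the number of hops $k$ used to build an ego-network sub-graph around each node. As a minimal sketch of that construction (names and the adjacency-dict representation are illustrative assumptions, not the authors' code), a $k$-hop ego-network can be extracted with a bounded breadth-first search:

```python
from collections import deque

def k_hop_ego(adj, root, k):
    """Return the set of nodes within k hops of `root` in an
    undirected graph given as an adjacency dict.
    (Hypothetical helper for illustration only.)"""
    seen = {root}
    frontier = deque([(root, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:
            continue  # do not expand beyond k hops
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return seen

# Toy path graph 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
# The 1-hop ego-network of node 2 covers nodes {1, 2, 3}.
```

In the method described above, a permutation equivariant update function would then be applied to each such sub-graph rather than to the whole graph, which is the source of the memory savings the abstract reports.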