In this work we develop a new method, named Sub-graph Permutation Equivariant Networks (SPEN), which provides a framework for building graph neural networks that operate on sub-graphs, while using permutation equivariant update functions that are also equivariant to a novel choice of automorphism groups. Message passing neural networks have been shown to be limited in their expressive power, and recent approaches to overcoming this either lack scalability or require structural information to be encoded into the feature space. The general framework presented here overcomes the scalability issues associated with global permutation equivariance by operating on sub-graphs. In addition, operating on sub-graphs improves upon the expressive power of higher-dimensional global permutation equivariant networks, owing to the fact that two non-distinguishable graphs often contain distinguishable sub-graphs. Furthermore, the proposed framework requires only a choice of $k$-hops for creating ego-network sub-graphs and a choice of representation space for each layer, which makes the method readily applicable across a range of graph-based domains. We experimentally validate the method on a range of graph benchmark classification tasks, demonstrating state-of-the-art or highly competitive results on all benchmarks. Further, we demonstrate that the use of local update functions offers a significant reduction in GPU memory usage compared with global methods.
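To make the sub-graph construction concrete, the sketch below (a minimal illustration, not the authors' implementation) extracts the $k$-hop ego-network sub-graph centred on each node of a graph using networkx; the function name `extract_ego_subgraphs` and the example graph are illustrative assumptions.

```python
import networkx as nx


def extract_ego_subgraphs(G: nx.Graph, k: int) -> list[nx.Graph]:
    """Return the k-hop ego-network sub-graph centred on each node of G.

    Each sub-graph contains every node within distance k of the centre
    node, together with all edges among those nodes.
    """
    return [nx.ego_graph(G, node, radius=k) for node in G.nodes]


# Usage: decompose a 6-cycle into its 1-hop ego networks.
G = nx.cycle_graph(6)
for node, sg in zip(G.nodes, extract_ego_subgraphs(G, k=1)):
    print(f"node {node}: {sg.number_of_nodes()} nodes, "
          f"{sg.number_of_edges()} edges")
```

Each sub-graph would then be processed by a permutation equivariant update function, as described above; since every node yields one sub-graph, the bag of ego networks covers the whole graph while keeping each update local.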