In this work we develop a new method, named locally permutation-equivariant graph neural networks, which provides a framework for building graph neural networks that operate on local node neighbourhoods, through sub-graphs, while using permutation-equivariant update functions. Message passing neural networks have been shown to be limited in their expressive power, and recent approaches to overcome this either lack scalability or require structural information to be encoded into the feature space. The general framework presented here overcomes the scalability issues associated with global permutation equivariance by operating on sub-graphs through restricted representations. In addition, we prove that there is no loss of expressivity by using restricted representations. Furthermore, the proposed framework only requires a choice of $k$-hops for creating sub-graphs and a choice of representation space to be used for each layer, which makes the method easily applicable across a range of graph-based domains. We experimentally validate the method on a range of graph benchmark classification tasks, demonstrating state-of-the-art or highly competitive results on all benchmarks. Further, we demonstrate that the use of local update functions significantly reduces GPU memory consumption compared to global methods.
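To make the sub-graph construction step concrete, the following is a minimal sketch, assuming `networkx`, of extracting a $k$-hop neighbourhood sub-graph around every node. The function name `k_hop_subgraphs` is illustrative, not the authors' implementation, and the permutation-equivariant update applied to each sub-graph is omitted.

```python
# Illustrative sketch (not the authors' code) of the k-hop sub-graph
# extraction the abstract describes. Each node yields one sub-graph,
# which a locally permutation-equivariant layer would then update.
import networkx as nx

def k_hop_subgraphs(G: nx.Graph, k: int) -> dict:
    """Return the k-hop neighbourhood sub-graph around every node of G."""
    # nx.ego_graph(G, v, radius=k) collects all nodes within k hops of v,
    # together with the edges among them.
    return {v: nx.ego_graph(G, v, radius=k) for v in G.nodes}

# Usage: one sub-graph per node on a standard benchmark graph.
G = nx.karate_club_graph()
subgraphs = k_hop_subgraphs(G, k=2)
print(len(subgraphs), subgraphs[0].number_of_nodes())
```

Because each update only sees a $k$-hop sub-graph rather than the full graph, the cost of the equivariant layer scales with local neighbourhood size, which is the source of the memory savings the abstract reports.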