Graph Neural Networks (GNNs) for representation learning of graphs broadly follow a neighborhood aggregation framework, where the representation vector of a node is computed by recursively aggregating and transforming feature vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs in capturing different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
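The neighborhood-aggregation scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's model: the toy graph, the `aggregate_step` helper, and the use of an identity map in place of a learned transformation are assumptions; the `(1 + eps) * h_v + sum(...)` form mirrors a GIN-style sum-aggregation update.

```python
# Minimal sketch of one neighborhood-aggregation step on a toy graph,
# represented as an adjacency dict with scalar node features.
# A GNN would follow this sum with a learned transformation (an MLP);
# here that transformation is left out for illustration.

def aggregate_step(adj, features, eps=0.0):
    """One round of sum aggregation: each node combines its own feature
    (scaled by 1 + eps) with the sum of its neighbors' features."""
    return {
        v: (1 + eps) * features[v] + sum(features[u] for u in adj[v])
        for v in adj
    }

# Toy 4-cycle: 0-1-2-3-0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
features = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
print(aggregate_step(adj, features))  # → {0: 7.0, 1: 6.0, 2: 9.0, 3: 8.0}
```

Stacking k such rounds lets each node's representation depend on its k-hop neighborhood; the choice of aggregator (sum vs. mean or max) is exactly what the paper's expressiveness analysis turns on.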