Graph Neural Networks (GNNs) are an emerging class of models for learning on non-Euclidean data. Recently, there has been growing interest in designing GNNs that scale to large graphs. Most existing methods rely on "graph sampling" or "layer-wise sampling" techniques to reduce training time. However, these methods still suffer from degraded performance and limited scalability when applied to graphs with billions of edges. This paper presents GBP, a scalable GNN that employs a localized bidirectional propagation process, running from both the feature vectors and the training/testing nodes. Theoretical analysis shows that GBP is the first method to achieve sub-linear time complexity in both the precomputation and the training phases. An extensive empirical study demonstrates that GBP achieves state-of-the-art performance with significantly less training/testing time. Most notably, GBP delivers superior performance on a graph with over 60 million nodes and 1.8 billion edges in less than half an hour on a single machine.
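To make the bidirectional idea concrete, the sketch below is a toy illustration (not GBP's actual algorithm or its approximation guarantees): it estimates a multi-level feature propagation Z = Σ_l w_l P^l X by computing a few levels exactly from the feature side and estimating the remaining levels with random walks from each query node, so the two halves meet in the middle. The row-stochastic propagation matrix, uniform weights, and dense-matrix representation are simplifying assumptions made here for clarity.

```python
import numpy as np

def row_normalized(A):
    """Row-stochastic transition matrix P = D^{-1} A (an assumption of
    this sketch; GBP uses a more general normalized propagation)."""
    d = A.sum(axis=1, keepdims=True)
    return A / np.maximum(d, 1)

def bidirectional_propagation(A, X, query, L=4, L_feat=2, n_walks=2000, seed=0):
    """Estimate Z[v] = sum_{l=0}^{L} w_l (P^l X)[v] for v in `query`,
    with uniform weights w_l = 1/(L+1).

    Levels 0..L_feat are computed exactly from the feature side; each
    remaining level l is estimated by walking (l - L_feat) steps from
    the query node and reading the pre-propagated features at the
    endpoint -- the two propagation halves meet in the middle.
    """
    rng = np.random.default_rng(seed)
    P = row_normalized(A)
    w = 1.0 / (L + 1)

    # Feature-side half: P^0 X ... P^{L_feat} X, shared by all queries.
    feats = [X.astype(float)]
    for _ in range(L_feat):
        feats.append(P @ feats[-1])

    Z = np.zeros((len(query), X.shape[1]))
    for i, v in enumerate(query):
        # Exact contribution from the precomputed levels.
        for l in range(L_feat + 1):
            Z[i] += w * feats[l][v]
        # Monte Carlo contribution: walks of length l - L_feat from v.
        for l in range(L_feat + 1, L + 1):
            steps = l - L_feat
            acc = np.zeros(X.shape[1])
            for _ in range(n_walks):
                u = v
                for _ in range(steps):
                    u = rng.choice(A.shape[0], p=P[u])
                acc += feats[L_feat][u]
            Z[i] += w * acc / n_walks
    return Z
```

The precomputed feature-side levels are shared across all training/testing nodes, while the walk cost is localized to the query nodes; this division of labor is what the abstract's sub-linear complexity claim rests on in the full method.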