Graph neural networks (GNNs) use graph convolutions to exploit network invariances and learn meaningful features from network data. However, on large-scale graphs convolutions incur a high computational cost, which limits scalability. Leveraging the graphon -- the limit object of a sequence of graphs -- in this paper we consider the problem of learning a graphon neural network (WNN) -- the limit object of a GNN -- by training GNNs on graphs sampled from the graphon with Bernoulli edges. Under smoothness conditions, we show that: (i) the expected distance between the learning steps on the GNN and on the WNN decreases asymptotically with the size of the graph, and (ii) when training on a sequence of growing graphs, gradient descent follows the learning direction of the WNN. Inspired by these results, we propose a novel algorithm for learning GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training. This algorithm is benchmarked on both a recommendation system and a decentralized control problem, where it is shown to retain performance comparable to that of its large-scale counterpart at a reduced computational cost.
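To make the growing-graph training idea concrete, the sketch below samples graphs of increasing size from a synthetic graphon (latent node positions drawn uniformly on [0, 1], edges drawn as Bernoulli variables with probability given by the graphon) and trains a one-layer graph convolution on each. The graphon, node-count schedule, features, target, and learning rate are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of training on graphs of growing size sampled from a graphon.
# Assumptions: a smooth synthetic graphon, a toy regression target, and a
# single-tap graph convolution y_hat = (A/n) X Theta.
import numpy as np
import torch

def sample_graph(n, graphon, rng):
    """Sample an n-node graph: latent points u_i ~ Uniform[0, 1],
    edges A_ij ~ Bernoulli(W(u_i, u_j)), undirected, no self-loops."""
    u = rng.uniform(size=n)
    probs = graphon(u[:, None], u[None, :])
    A = (rng.uniform(size=(n, n)) < probs).astype(np.float32)
    A = np.triu(A, 1)
    A = A + A.T
    return torch.tensor(A), torch.tensor(u, dtype=torch.float32)

# Assumed smooth graphon W(x, y); any bounded symmetric kernel in [0, 1] works.
graphon = lambda x, y: 0.8 * np.exp(-3.0 * (x - y) ** 2)
rng = np.random.default_rng(0)

theta = torch.zeros(1, 1, requires_grad=True)          # shared GNN weights
opt = torch.optim.SGD([theta], lr=1e-2)

for n in [100, 200, 400, 800]:                          # growing-graph schedule
    A, u = sample_graph(n, graphon, rng)
    x = u.unsqueeze(1)                                  # node features (illustrative)
    y = torch.sin(2 * np.pi * u).unsqueeze(1)           # illustrative regression target
    for _ in range(50):                                 # train on the current graph
        opt.zero_grad()
        y_hat = (A / n) @ x @ theta                     # normalized graph convolution
        loss = torch.mean((y_hat - y) ** 2)
        loss.backward()
        opt.step()
    print(f"n={n:4d}  loss={loss.item():.4f}")
```

Because the parameters `theta` are shared across graphs of different sizes, the weights learned on smaller sampled graphs transfer directly to the larger ones, which is the mechanism the growing-graph algorithm exploits to reduce cost.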