Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data. Because convolutions are based on matrix multiplications, they incur high computational costs, leading to scalability limitations in practice. To overcome these limitations, existing methods train GNNs on graphs with a smaller number of nodes and then transfer the trained GNN to larger graphs. Although these methods can bound the difference between the outputs of the GNN on graphs of different sizes, they provide no guarantees relative to the optimal GNN on the very large graph. In this paper, we propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon. We grow the size of the graph as we train, and we show that our proposed methodology -- learning by transference -- converges to a neighborhood of a first-order stationary point on the graphon data. A numerical experiment validates our proposed approach.
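The idea of growing the graph during training can be sketched as follows. This is a minimal illustration, not the paper's implementation: the graphon, the node signal, the toy regression target, and the single polynomial graph-filter layer are all assumptions, and gradients are taken by finite differences for brevity. Graphs of increasing size are sampled from a fixed graphon, and the same filter coefficients are trained across all sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graphon_graph(n, graphon):
    """Sample an n-node graph from a graphon: draw latent points u_i ~ U[0,1]
    and connect nodes i, j with probability graphon(u_i, u_j)."""
    u = np.sort(rng.uniform(size=n))
    prob = graphon(u[:, None], u[None, :])
    upper = np.triu((rng.uniform(size=(n, n)) < prob).astype(float), 1)
    return upper + upper.T, u

def gnn_layer(A, x, h):
    """One graph-filter layer: a polynomial in the normalized adjacency,
    followed by a pointwise nonlinearity."""
    S = A / max(A.sum(axis=1).max(), 1.0)  # crude degree normalization
    z = sum(h[k] * np.linalg.matrix_power(S, k) @ x for k in range(len(h)))
    return np.tanh(z)

# Assumed example graphon and toy task (not from the paper).
graphon = lambda u, v: 0.8 * np.exp(-2.0 * np.abs(u - v))
h = np.zeros(3)   # shared filter taps, learned across graph sizes
lr = 0.05

for n in [50, 100, 200, 400]:  # grow the graph as training proceeds
    A, u = sample_graphon_graph(n, graphon)
    x = np.cos(np.pi * u)      # node signal induced by latent positions
    y = np.sin(np.pi * u)      # regression target for this sketch
    for _ in range(100):
        # finite-difference gradient of the MSE w.r.t. the shared taps h
        loss0 = np.mean((gnn_layer(A, x, h) - y) ** 2)
        grad = np.zeros_like(h)
        for k in range(len(h)):
            hp = h.copy()
            hp[k] += 1e-4
            grad[k] = (np.mean((gnn_layer(A, x, hp) - y) ** 2) - loss0) / 1e-4
        h -= lr * grad
```

Because every graph is drawn from the same graphon, the filter learned on the small graphs remains a sensible initialization on the larger ones, which is the intuition behind continuing gradient descent as the graph grows rather than retraining from scratch at the final size.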