Graph neural networks (GNNs) are deep convolutional architectures consisting of layers composed of graph convolutions and pointwise nonlinearities. Due to their invariance and stability properties, GNNs are provably successful at learning representations from network data. However, training them requires matrix computations which can be expensive for large graphs. To address this limitation, we investigate the ability of GNNs to be transferred across graphs. We consider graphons, which are both graph limits and generative models for weighted and stochastic graphs, to define limit objects of graph convolutions and GNNs -- graphon convolutions and graphon neural networks (WNNs) -- which we use as generative models for graph convolutions and GNNs. We show that these graphon filters and WNNs can be approximated by graph filters and GNNs sampled from them on weighted and stochastic graphs. Using these results, we then derive error bounds for transferring graph filters and GNNs across such graphs. These bounds show that transferability increases with the graph size, and reveal a tradeoff between transferability and spectral discriminability which, in GNNs, is alleviated by the pointwise nonlinearities. These findings are further verified empirically in numerical experiments on movie recommendation and decentralized robot control.
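As a minimal illustration of the architecture described above, the sketch below implements a polynomial graph convolution followed by a pointwise nonlinearity, i.e., one GNN layer. This is an assumed reference implementation for exposition only: the function names (`graph_filter`, `gnn_layer`), the choice of ReLU as the nonlinearity, and the filter-order convention are not taken from the paper.

```python
import numpy as np

def graph_filter(S, x, h):
    """Polynomial graph convolution y = sum_k h[k] * S^k @ x,
    where S is a graph shift operator (e.g., adjacency matrix),
    x is a graph signal, and h are the filter coefficients."""
    y = np.zeros_like(x, dtype=float)
    Sk = np.eye(S.shape[0])  # S^0
    for hk in h:
        y += hk * (Sk @ x)
        Sk = Sk @ S  # advance to the next power of S
    return y

def gnn_layer(S, x, h):
    """One GNN layer: graph convolution composed with a
    pointwise nonlinearity (ReLU, chosen here for concreteness)."""
    return np.maximum(graph_filter(S, x, h), 0.0)

# Usage on a 3-node path graph
S = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
x = np.array([1.0, 0.0, 0.0])
y = gnn_layer(S, x, [0.5, 1.0])  # h0*x + h1*S@x, then ReLU
```

Because the filter is a polynomial in S, it commutes with node relabelings, which is the permutation-invariance property the abstract refers to; the transferability analysis studies what happens when S is replaced by a larger graph sampled from the same graphon.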