Message passing-based graph neural networks (GNNs) have achieved great success in many real-world applications. However, training GNNs on large-scale graphs suffers from the well-known neighbor explosion problem, i.e., the dependencies of a node grow exponentially with the number of message passing layers. Subgraph-wise sampling methods -- a promising class of mini-batch training techniques -- discard messages outside the mini-batches in backward passes to avoid the neighbor explosion problem, at the expense of gradient estimation accuracy. This poses significant challenges to their convergence analysis and convergence speeds, which seriously limits their reliable real-world application. To address this challenge, we propose a novel subgraph-wise sampling method with a convergence guarantee, namely Local Message Compensation (LMC). To the best of our knowledge, LMC is the {\it first} subgraph-wise sampling method with provable convergence. The key idea of LMC is to retrieve the messages discarded in backward passes based on a message passing formulation of backward passes. Through efficient and effective compensation for the discarded messages in both forward and backward passes, LMC computes accurate mini-batch gradients and thus accelerates convergence. We further show that LMC converges to first-order stationary points of GNNs. Experiments on large-scale benchmark tasks demonstrate that LMC significantly outperforms state-of-the-art subgraph-wise sampling methods in terms of efficiency.
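To make the ``message passing formulation of backward passes'' concrete, the following is a minimal sketch for an undirected graph with generic update and message functions; the symbols $h_v^{(\ell)}$, $U^{(\ell)}$, $M^{(\ell)}$, $\mathcal{N}(v)$, $\mathcal{V}_{\mathcal{B}}$, and $\mathcal{L}$ are illustrative notation introduced here rather than taken from the paper, and the exact compensation scheme is not reproduced. In a forward pass, the embedding of node $v$ at layer $\ell$ aggregates messages from its neighbors $\mathcal{N}(v)$; by the chain rule, the backward pass is itself a message passing procedure in which gradient ``messages'' flow along the reversed edges:
\[
h_v^{(\ell)} \;=\; U^{(\ell)}\!\Big(h_v^{(\ell-1)},\; \sum_{u \in \mathcal{N}(v)} M^{(\ell)}\big(h_u^{(\ell-1)}, h_v^{(\ell-1)}\big)\Big),
\]
\[
\frac{\partial \mathcal{L}}{\partial h_u^{(\ell-1)}} \;=\; \Big(\frac{\partial h_u^{(\ell)}}{\partial h_u^{(\ell-1)}}\Big)^{\!\top} \frac{\partial \mathcal{L}}{\partial h_u^{(\ell)}} \;+\; \sum_{v \in \mathcal{N}(u)} \Big(\frac{\partial h_v^{(\ell)}}{\partial h_u^{(\ell-1)}}\Big)^{\!\top} \frac{\partial \mathcal{L}}{\partial h_v^{(\ell)}}.
\]
Under this sketch, a subgraph-wise sampling method that restricts both sums to nodes inside a mini-batch $\mathcal{V}_{\mathcal{B}}$ drops the terms with $u \notin \mathcal{V}_{\mathcal{B}}$ or $v \notin \mathcal{V}_{\mathcal{B}}$, which biases the mini-batch gradient; LMC instead compensates for these discarded forward messages and backward gradient messages so that the resulting mini-batch gradients remain accurate.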