Graph Convolutional Networks (GCNs) have achieved impressive empirical advances across a wide variety of graph-related applications. Despite this success, training GCNs on large graphs suffers from computational and memory issues. A potential path around these obstacles is sampling-based methods, where at each layer a subset of nodes is sampled. Although recent studies have empirically demonstrated the effectiveness of sampling-based methods, these works lack theoretical convergence guarantees under realistic settings and cannot fully leverage the information of evolving parameters during optimization. In this paper, we describe and analyze a general \textbf{\textit{doubly variance reduction}} schema that can accelerate any sampling method under a given memory budget. The motivation for the proposed schema is a careful analysis of the variance of sampling methods, which shows that the induced variance can be decomposed into node embedding approximation variance (\emph{zeroth-order variance}) during forward propagation and layerwise-gradient variance (\emph{first-order variance}) during backward propagation. We theoretically analyze the convergence of the proposed schema and show that it enjoys an $\mathcal{O}(1/T)$ convergence rate. We complement our theoretical results by integrating the proposed schema into different sampling methods and applying them to different large real-world graphs. Code is publicly available at~\url{https://github.com/CongWeilin/SGCN.git}.
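As a schematic summary (the notation here is illustrative; the precise norms and constants are derived in the paper body), the variance decomposition underlying the schema can be written as
\begin{equation*}
\mathrm{Var}\big(\widetilde{\nabla}\mathcal{L}(\theta)\big) \;\le\; \underbrace{\Delta_{\mathrm{zeroth}}}_{\substack{\text{node embedding approximation}\\ \text{(forward propagation)}}} \;+\; \underbrace{\Delta_{\mathrm{first}}}_{\substack{\text{layerwise gradient}\\ \text{(backward propagation)}}},
\end{equation*}
where $\widetilde{\nabla}\mathcal{L}(\theta)$ denotes the stochastic gradient computed on the sampled nodes. The doubly variance reduction schema applies a variance-reduction correction to each term separately, which is what yields the stated $\mathcal{O}(1/T)$ rate.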