We propose a framework that automatically transforms non-scalable GNNs into precomputation-based GNNs that are efficient and scalable on large-scale graphs. The advantages of our framework are twofold: 1) it transforms various non-scalable GNNs so that they scale well to large-scale graphs by separating the local feature aggregation in their graph convolution from weight learning; 2) it executes the precomputation efficiently on the GPU, even for large-scale graphs, by decomposing their edges into small, disjoint, and balanced sets. Through extensive experiments on large-scale graphs, we demonstrate that the transformed GNNs train faster than existing GNNs while achieving accuracy competitive with state-of-the-art GNNs. Our transformation framework thus provides simple and efficient baselines for future research on scalable GNNs.
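To make the two ideas concrete, here is a minimal PyTorch sketch of the general pattern the abstract describes: multi-hop neighborhood aggregation is precomputed once (processing the edge list in disjoint, roughly balanced chunks so each sparse operation fits in GPU memory), and weight learning then reduces to training an ordinary MLP on the fixed features. All names (`precompute_features`, `PrecomputedGNN`), the mean-aggregation choice, and the naive equal-size chunking are hypothetical illustrations, not the paper's actual decomposition or transformation rules.

```python
import torch
import torch.nn as nn

def precompute_features(edge_index, x, num_hops=2, num_chunks=4):
    """Precompute multi-hop mean-aggregated features.

    edge_index: [2, E] tensor of (src, dst) pairs; x: [N, F] node features.
    The edge list is split into disjoint, near-equal chunks (a hypothetical
    stand-in for the paper's balanced edge decomposition) so that very large
    graphs can be aggregated chunk-by-chunk on the GPU.
    """
    n = x.size(0)
    chunks = torch.chunk(edge_index, num_chunks, dim=1)
    feats, h = [x], x
    for _ in range(num_hops):
        agg = torch.zeros_like(h)
        deg = torch.zeros(n, device=x.device)
        for ch in chunks:
            src, dst = ch
            agg.index_add_(0, dst, h[src])                     # sum neighbor features
            deg.index_add_(0, dst, torch.ones(src.numel(),     # count in-degree
                                              device=x.device))
        h = agg / deg.clamp(min=1).unsqueeze(-1)               # mean aggregation
        feats.append(h)
    return torch.cat(feats, dim=1)  # fixed features; no weights were involved

class PrecomputedGNN(nn.Module):
    """Weight learning decoupled from aggregation: a plain MLP over
    the precomputed features, trainable with standard mini-batching."""
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, feats):
        return self.mlp(feats)
```

Because aggregation happens once up front, training touches the graph structure zero times per epoch; this is the property that lets precomputation-based GNNs scale where message-passing GNNs do not.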