The Graph Convolutional Network (GCN) has been successfully applied to many graph-based applications. Training a large-scale GCN model, however, remains challenging: due to the node dependency and layer dependency of the GCN architecture, the training process requires substantial computation time and memory. In this paper, we propose a parallel and distributed GCN training algorithm based on the Alternating Direction Method of Multipliers (ADMM) to tackle both challenges simultaneously. We first split the GCN layers into independent blocks to achieve layer parallelism. We then reduce node dependency by partitioning the graph into several dense communities, each of which can be trained by an agent in parallel. Finally, we provide solutions for all subproblems in the community-based ADMM algorithm. Preliminary results demonstrate that our community-based ADMM training algorithm achieves a more than threefold speedup while attaining the best performance compared with state-of-the-art methods.
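The core idea of training each community with its own agent and reconciling them through ADMM can be illustrated with a minimal consensus-ADMM sketch. This is not the paper's algorithm: each agent's GCN subproblem is replaced here by a toy quadratic loss `f_i(w) = 0.5 * a_i * (w - b_i)**2` so the local update has a closed form, and all names are illustrative.

```python
# Toy consensus ADMM: each "community agent" i holds a local quadratic
# loss f_i(w) = 0.5 * a_i * (w - b_i)**2 standing in for the GCN
# subproblem it would solve. The agents' parameters are driven to a
# shared consensus value z via scaled dual variables u.

def consensus_admm(a, b, rho=1.0, iters=200):
    n = len(a)
    w = [0.0] * n        # local parameters, one per community agent
    u = [0.0] * n        # scaled dual variables
    z = 0.0              # global consensus parameter
    for _ in range(iters):
        # Each agent minimizes its local loss plus the ADMM penalty
        # (rho/2)*(w - z + u_i)^2; closed form for the quadratic toy loss.
        w = [(a[i] * b[i] + rho * (z - u[i])) / (a[i] + rho)
             for i in range(n)]
        # Consensus step: average the agents' shifted parameters.
        z = sum(w[i] + u[i] for i in range(n)) / n
        # Dual update: accumulate each agent's disagreement with z.
        u = [u[i] + w[i] - z for i in range(n)]
    return z

# The consensus converges to the minimizer of the summed losses,
# which for these quadratics is sum(a_i * b_i) / sum(a_i).
print(consensus_admm([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
```

In the paper's setting, the closed-form scalar update would be replaced by each agent's GCN subproblem solution over its community, and the consensus/dual steps would coordinate the shared model parameters across agents.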