We present distributed algorithms for training dynamic Graph Neural Networks (GNNs) on large-scale graphs spanning multi-node, multi-GPU systems. To the best of our knowledge, this is the first scaling study on dynamic GNNs. We devise mechanisms for reducing GPU memory usage and identify two execution-time bottlenecks: CPU-GPU data transfer and communication volume. Exploiting properties of dynamic graphs, we design a graph difference-based strategy that significantly reduces the transfer time. We develop a simple but effective data distribution technique under which the communication volume remains fixed and linear in the input size for any number of GPUs. Our experiments using billion-size graphs on a system of 128 GPUs show that: (i) the distribution scheme achieves up to 30x speedup on 128 GPUs; (ii) the graph-difference technique reduces the transfer time by a factor of up to 4.1x and the overall execution time by up to 40%.
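To convey the intuition behind the graph-difference strategy, the following is a minimal illustrative sketch, not the paper's implementation: rather than copying each full snapshot's edge list from CPU to GPU, only the edges added or removed since the previous snapshot cross the CPU-GPU boundary. All names here (edge_delta, to_gpu_tensor, the example snapshots) are hypothetical.

```python
import torch

def edge_delta(prev_edges: set, curr_edges: set):
    """Edges to add and remove when moving from one snapshot to the next."""
    added = curr_edges - prev_edges
    removed = prev_edges - curr_edges
    return added, removed

def to_gpu_tensor(edges: set, device: torch.device):
    """Pack a set of (src, dst) pairs into a 2 x |E| tensor on the target device."""
    if not edges:
        return torch.empty((2, 0), dtype=torch.long, device=device)
    return torch.tensor(sorted(edges), dtype=torch.long, device=device).t()

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Two consecutive snapshots of a dynamic graph that differ in only a few edges.
snap_t  = {(0, 1), (1, 2), (2, 3)}
snap_t1 = {(0, 1), (2, 3), (3, 4)}   # (1, 2) removed, (3, 4) added

added, removed = edge_delta(snap_t, snap_t1)

# Only the delta is transferred to the GPU; the resident copy is then patched.
added_gpu = to_gpu_tensor(added, device)
removed_gpu = to_gpu_tensor(removed, device)
print(f"transfer {added_gpu.shape[1] + removed_gpu.shape[1]} edges "
      f"instead of {len(snap_t1)}")
```

Since consecutive snapshots of a dynamic graph typically share most of their edges, the delta is far smaller than the full snapshot, which is the source of the reported reduction in CPU-GPU transfer time.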