Training modern large natural language processing (NLP) models commonly relies on 3D parallelism to split the model across many GPUs. This approach, however, suffers from high inter-node communication overhead. Compressing the communication mitigates this overhead by reducing the inter-node traffic volume, but existing compression techniques have critical limitations when applied to NLP models trained with 3D parallelism: 1) they target only data-parallel traffic, and 2) the existing compression schemes already degrade the model quality too much. In this paper, we present Optimus-CC, a fast and scalable distributed training framework for large NLP models with aggressive communication compression. Optimus-CC differs from existing communication compression frameworks in the following ways. First, we compress pipeline-parallel (inter-stage) traffic; specifically, we compress the inter-stage backpropagation and the embedding synchronization in addition to applying existing data-parallel traffic compression methods. Second, we propose techniques to avoid the model quality drop caused by compression, and we provide mathematical and empirical analyses showing that these techniques successfully suppress the compression error. Lastly, we analyze the pipeline and opt to selectively compress only the traffic lying on the critical path, which further reduces the compression error. We demonstrate our solution on a GPU cluster and achieve superior speedup over state-of-the-art baseline solutions for distributed training without sacrificing model quality.
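To make the idea of lossy communication compression concrete, the sketch below shows top-k sparsification with a local error-feedback buffer, the general kind of scheme that could be applied to inter-stage gradient traffic. This is a minimal illustration under our own assumptions, not the paper's implementation; the class name, the compression ratio, and the tensors are all hypothetical.

```python
# Minimal sketch (illustrative, not Optimus-CC's actual code): top-k
# sparsification with error feedback, applied to a gradient tensor before
# it would be sent to the next pipeline stage.
import torch


class TopKCompressor:
    """Keep only the k largest-magnitude elements; accumulate the dropped
    signal locally so it is re-injected into the next round (error feedback)."""

    def __init__(self, ratio: float = 0.01):
        self.ratio = ratio
        self.residual = None  # error-feedback buffer

    def compress(self, grad: torch.Tensor):
        flat = grad.flatten()
        if self.residual is None:
            self.residual = torch.zeros_like(flat)
        corrected = flat + self.residual              # add back previous error
        k = max(1, int(corrected.numel() * self.ratio))
        _, indices = corrected.abs().topk(k)
        values = corrected[indices]                   # keep signed values
        # remember what we failed to send this round
        self.residual = corrected.clone()
        self.residual[indices] = 0.0
        return values, indices, grad.shape

    @staticmethod
    def decompress(values, indices, shape):
        out = torch.zeros(torch.Size(shape).numel(), dtype=values.dtype)
        out[indices] = values
        return out.reshape(shape)


# Example: compress a fake inter-stage gradient before sending it downstream.
comp = TopKCompressor(ratio=0.05)
grad = torch.randn(4, 1024)
vals, idx, shape = comp.compress(grad)
restored = TopKCompressor.decompress(vals, idx, shape)
print(f"sent {vals.numel()} of {grad.numel()} elements")
```

In such a scheme, only the (value, index) pairs cross the inter-node link, while the residual buffer stays local to the sender; this is one common way compression error is kept from accumulating across iterations.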