Communication compression is a crucial technique for modern distributed learning systems to alleviate their communication bottlenecks over slow networks. Despite recent intensive studies of gradient compression for data-parallel training, compressing the activations of models trained with pipeline parallelism remains an open problem. In this paper, we propose AC-SGD, a novel activation compression algorithm for communication-efficient pipeline-parallel training over slow networks. Unlike previous efforts in activation compression, AC-SGD compresses the changes of the activations instead of the activation values directly. This allows us to show, to the best of our knowledge for the first time, that one can still achieve an $O(1/\sqrt{T})$ convergence rate for non-convex objectives under activation compression, without making assumptions on gradient unbiasedness that do not hold for deep learning models with non-linear activation functions. We then show that AC-SGD can be optimized and implemented efficiently, without additional end-to-end runtime overhead. We evaluate AC-SGD by fine-tuning language models with up to 1.5 billion parameters, compressing activations to 2-4 bits. AC-SGD provides up to 4.3X end-to-end speed-up on slower networks, without sacrificing model quality. Moreover, we show that AC-SGD can be combined with state-of-the-art gradient compression algorithms to enable "end-to-end communication compression": all communication between machines, including model gradients, forward activations, and backward gradients, is compressed into lower precision. This provides up to 4.9X end-to-end speed-up, without sacrificing model quality.
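To make the core idea concrete, below is a minimal sketch, not the paper's implementation, of delta-based activation compression in NumPy. All names (`ActivationDeltaCompressor`, `quantize`, `dequantize`) are hypothetical, and the per-tensor uniform quantizer is an illustrative assumption. The key mechanism is that sender and receiver maintain bit-identical buffers of previously reconstructed activations, so only a low-bit quantized change needs to cross the network.

```python
import numpy as np

def quantize(x, num_bits=4):
    """Uniform symmetric quantization to num_bits with a per-tensor scale (assumption)."""
    levels = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(x)) / levels + 1e-12
    q = np.clip(np.round(x / scale), -levels, levels).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

class ActivationDeltaCompressor:
    """Hypothetical sketch: compress the *change* of activations across passes
    over the same sample, rather than the activation values themselves."""

    def __init__(self, num_bits=4):
        self.num_bits = num_bits
        self.buffers = {}  # sample_id -> last reconstructed activation

    def compress(self, sample_id, activation):
        prev = self.buffers.get(sample_id, np.zeros_like(activation))
        delta = activation - prev
        q, scale = quantize(delta, self.num_bits)
        # Advance the buffer with the *reconstructed* value so sender and
        # receiver buffers stay identical despite quantization error.
        self.buffers[sample_id] = prev + dequantize(q, scale)
        return q, scale  # only this low-bit payload is communicated

    def decompress(self, sample_id, q, scale):
        prev = self.buffers.get(sample_id, np.zeros(q.shape, dtype=np.float32))
        recon = prev + dequantize(q, scale)
        self.buffers[sample_id] = recon
        return recon
```

As activations of a given sample change slowly between epochs, the deltas shrink over training, which is what allows aggressive 2-4 bit quantization without the unbiased-gradient assumption needed when compressing activation values directly.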