The performance and efficiency of distributed training of Deep Neural Networks depend heavily on the performance of gradient averaging among all participating nodes, which is bounded by the communication between nodes. There are two major strategies to reduce communication overhead: one is to hide communication by overlapping it with computation, and the other is to reduce message sizes. The first solution works well for linear neural architectures, but the latest networks such as ResNet and Inception offer limited opportunity for this overlapping. Therefore, researchers have paid more attention to minimizing communication. In this paper, we present a novel gradient compression framework derived from insights into real gradient distributions, which strikes a balance between compression ratio, accuracy, and computational overhead. Our framework has two major novel components: sparsification of gradients in the frequency domain, and a range-based floating-point representation to quantize and further compress the gradient frequencies. Both components are dynamic, with tunable parameters that achieve different compression ratios depending on the accuracy requirement and the system platform, and both achieve very high throughput on GPUs. We prove that our techniques guarantee convergence with a diminishing compression ratio. Our experiments show that the proposed compression framework effectively improves the scalability of the most popular neural networks on a 32-GPU cluster compared to the baseline of no compression, without compromising accuracy or convergence speed.
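To make the two components concrete, the following is a minimal, hypothetical Python/NumPy sketch of frequency-domain sparsification followed by range-based quantization of the retained coefficients. It is an illustration under simplified assumptions (a 1-D gradient, a real FFT, and a fixed-point code standing in for the paper's range-based floating-point representation); the paper's actual GPU kernels, selection thresholds, and bit layouts are not reproduced here, and the function names are invented for this example.

import numpy as np

def compress(grad, keep_ratio=0.05, bits=8):
    """Sparsify the gradient in the frequency domain, then quantize the
    retained coefficients with a simple range-based integer code.
    (Illustrative sketch, not the paper's exact scheme.)"""
    # 1) Transform the gradient to the frequency domain (real FFT).
    freq = np.fft.rfft(grad)
    # 2) Keep only the largest-magnitude coefficients (sparsification).
    k = max(1, int(keep_ratio * freq.size))
    idx = np.argpartition(np.abs(freq), -k)[-k:]
    vals = freq[idx]
    # 3) Range-based quantization: map the observed value range of the
    #    real and imaginary parts onto `bits`-bit integers.
    parts = np.concatenate([vals.real, vals.imag])
    lo, hi = parts.min(), parts.max()
    scale = (hi - lo) / (2 ** bits - 1)
    if scale == 0.0:
        scale = 1.0
    q = np.round((parts - lo) / scale).astype(np.uint8 if bits <= 8 else np.uint16)
    return idx, q, lo, scale, grad.size

def decompress(idx, q, lo, scale, n):
    """Invert the quantization and the frequency transform to recover a
    dense (lossy) gradient estimate."""
    parts = q.astype(np.float64) * scale + lo
    k = idx.size
    vals = parts[:k] + 1j * parts[k:]
    freq = np.zeros(n // 2 + 1, dtype=np.complex128)
    freq[idx] = vals
    return np.fft.irfft(freq, n=n)

# Usage: compress a synthetic "gradient", reconstruct it, and report the error.
g = np.random.randn(1 << 16)
packed = compress(g, keep_ratio=0.05, bits=8)
g_hat = decompress(*packed)
print("relative reconstruction error:", np.linalg.norm(g - g_hat) / np.linalg.norm(g))

In this sketch the keep_ratio and bits parameters play the role of the tunable knobs mentioned above: lowering either increases the compression ratio at the cost of a larger reconstruction error.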