Training deep neural networks on large datasets of high-dimensional data requires a large amount of computation. A common solution is data-parallel distributed training, in which the model is replicated across several computational nodes, each of which processes a different chunk of the data. This approach, however, incurs high communication cost and latency, because the computed gradients need to be shared among the nodes at every iteration. The problem becomes more pronounced when the nodes communicate wirelessly, where network bandwidth is limited. To address this problem, various compression methods have been proposed, including sparsification, quantization, and entropy encoding of the gradients. Existing methods leverage intra-node information redundancy; that is, they compress the gradients at each node independently. In contrast, we argue that the gradients across the nodes are correlated and propose methods that leverage this inter-node redundancy to improve compression efficiency. Depending on the node communication protocol (parameter server or ring-allreduce), we propose two instances of our approach, which we coin Learned Gradient Compression (LGC). Our methods exploit an autoencoder, trained during the first stages of the distributed training, to capture the common information that exists in the gradients of the distributed nodes. We have tested our LGC methods on image classification and semantic segmentation tasks using different convolutional neural networks (ResNet50, ResNet101, PSPNet) and multiple datasets (ImageNet, Cifar10, CamVid). A ResNet101 model trained for image classification on Cifar10 achieved an accuracy of 93.57%, which is only 0.18% lower than that of baseline distributed training with uncompressed gradients.
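To make the intra-node compression baselines mentioned above concrete, the sketch below shows per-node top-k gradient sparsification followed by parameter-server style aggregation. This is a minimal, hypothetical illustration and not the paper's LGC method; the array shapes, compression ratio, number of nodes, and aggregation step are assumptions made for the example.

```python
# Toy illustration of per-node (intra-node) gradient sparsification.
# Each node transmits only its top-k gradient entries; the server rebuilds
# dense gradients and averages them. Shapes and the 1% ratio are assumptions.
import numpy as np

def topk_compress(grad: np.ndarray, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries."""
    flat = grad.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the top-k entries
    return idx, flat[idx]                         # (indices, values) to transmit

def topk_decompress(idx, vals, shape):
    """Rebuild a dense gradient from the transmitted (indices, values) pair."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)

# Hypothetical per-node gradients for one layer, aggregated at a parameter server.
rng = np.random.default_rng(0)
node_grads = [rng.standard_normal((256, 128)) for _ in range(4)]  # 4 worker nodes
decoded = [topk_decompress(*topk_compress(g), g.shape) for g in node_grads]
avg_grad = sum(decoded) / len(decoded)  # aggregated (approximate) gradient
```

Because each node compresses its gradient independently, this baseline ignores the correlation between gradients of different nodes; LGC instead uses a learned autoencoder to exploit that inter-node redundancy.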