Communication compression has become a key strategy to speed up distributed optimization. However, existing decentralized algorithms with compression mainly focus on compressing DGD-type algorithms, which are unsatisfactory in terms of convergence rate, stability, and the capability to handle heterogeneous data. Motivated by primal-dual algorithms, this paper proposes the first \underline{L}in\underline{EA}r convergent \underline{D}ecentralized algorithm with compression, LEAD. Our theory describes the coupled dynamics of the inexact primal and dual updates as well as the compression error, and we provide the first consensus error bound in such settings without assuming bounded gradients. Experiments on convex problems validate our theoretical analysis, and an empirical study on deep neural networks shows that LEAD is also applicable to non-convex problems.