In decentralized optimization, it is common algorithmic practice to have nodes interleave (local) gradient descent iterations with gossip (i.e., averaging over the network) steps. Motivated by the training of large-scale machine learning models, it is also increasingly common to require that messages be {\em lossily compressed} versions of the local parameters. In this paper, we show that, in such compressed decentralized optimization settings, there are benefits to having {\em multiple} gossip steps between successive gradient iterations, even when the cost of doing so is appropriately accounted for, e.g., by reducing the precision of the compressed information. In particular, we show that having $O(\log\frac{1}{\epsilon})$ gradient iterations with constant step size, with $O(\log\frac{1}{\epsilon})$ gossip steps between every pair of consecutive iterations, enables convergence to within $\epsilon$ of the optimal value for smooth non-convex objectives satisfying the Polyak-\L{}ojasiewicz condition. This result also holds for smooth strongly convex objectives. To our knowledge, this is the first work that derives convergence results for non-convex optimization under arbitrary communication compression.
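To make the interleaving pattern concrete, the following is a minimal sketch, not the paper's exact algorithm, of decentralized gradient descent in which every local gradient step is followed by several gossip rounds over lossily compressed parameters. The uniform stochastic quantizer, the mixing matrix \texttt{W}, and all function names below are illustrative assumptions rather than the scheme analyzed in the paper.

\begin{verbatim}
import numpy as np

def quantize(x, num_bits=8):
    """Uniform stochastic quantization (one possible lossy compressor)."""
    scale = np.max(np.abs(x)) + 1e-12
    levels = 2 ** (num_bits - 1) - 1
    y = x / scale * levels
    low = np.floor(y)
    q = low + (np.random.rand(*x.shape) < (y - low))  # round up w.p. frac(y)
    return q / levels * scale

def decentralized_gd(grad_fns, W, x0, step_size, T, R, num_bits=8):
    """Interleave local gradient steps with R compressed gossip rounds.

    grad_fns : list of per-node gradient oracles, grad_fns[i](x) = grad f_i(x)
    W        : doubly stochastic mixing (gossip) matrix of the network
    x0       : common initial iterate, shape (d,)
    T        : number of gradient iterations, e.g. O(log(1/eps))
    R        : gossip rounds between gradient steps, e.g. O(log(1/eps))
    """
    n = len(grad_fns)
    X = np.tile(x0, (n, 1))          # row i holds node i's local parameters
    for _ in range(T):
        # local gradient descent step at every node (constant step size)
        G = np.stack([grad_fns[i](X[i]) for i in range(n)])
        X = X - step_size * G
        # multiple gossip (averaging) rounds on compressed parameters
        for _ in range(R):
            Q = np.stack([quantize(X[i], num_bits) for i in range(n)])
            X = W @ Q                # each node averages compressed neighbor copies
    return X.mean(axis=0)
\end{verbatim}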