Due to the high communication cost in distributed and federated learning, methods relying on compressed communication are becoming increasingly popular. Moreover, the best theoretically and practically performing gradient-type methods invariably rely on some form of acceleration/momentum to reduce the number of communications (faster convergence), e.g., Nesterov's accelerated gradient descent (Nesterov, 1983, 2004) and Adam (Kingma and Ba, 2014). In order to combine the benefits of communication compression and convergence acceleration, we propose a \emph{compressed and accelerated} gradient method based on ANITA (Li, 2021) for distributed optimization, which we call CANITA. CANITA achieves the \emph{first accelerated rate} $O\bigg(\sqrt{\Big(1+\sqrt{\frac{\omega^3}{n}}\Big)\frac{L}{\epsilon}} + \omega\big(\frac{1}{\epsilon}\big)^{\frac{1}{3}}\bigg)$, which improves upon the state-of-the-art non-accelerated rate $O\left(\big(1+\frac{\omega}{n}\big)\frac{L}{\epsilon} + \frac{\omega^2+\omega}{\omega+n}\frac{1}{\epsilon}\right)$ of DIANA (Khaled et al., 2020) for distributed general convex problems, where $\epsilon$ is the target error, $L$ is the smoothness parameter of the objective, $n$ is the number of machines/devices, and $\omega$ is the compression parameter (a larger $\omega$ allows more aggressive compression, and $\omega=0$ means no compression). Our results show that as long as the number of devices $n$ is large (often true in distributed/federated learning), or the compression parameter $\omega$ is not very large, CANITA achieves the faster convergence rate $O\Big(\sqrt{\frac{L}{\epsilon}}\Big)$, i.e., the number of communication rounds is $O\Big(\sqrt{\frac{L}{\epsilon}}\Big)$ (vs. the $O\big(\frac{L}{\epsilon}\big)$ achieved by previous works). As a result, CANITA enjoys the advantages of both compression (compressed communication in each round) and acceleration (far fewer communication rounds).
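To make the role of the compression parameter concrete, the following is a minimal illustration assuming the standard unbiased $\omega$-compressor model commonly used in this literature: a randomized map $\mathcal{C}\colon \mathbb{R}^d \to \mathbb{R}^d$ is an $\omega$-compressor if
$$\mathbb{E}[\mathcal{C}(x)] = x \quad \text{and} \quad \mathbb{E}\big\|\mathcal{C}(x) - x\big\|^2 \le \omega \|x\|^2 \quad \text{for all } x \in \mathbb{R}^d.$$
For example, random-$k$ sparsification, which keeps $k$ out of $d$ coordinates chosen uniformly at random and rescales them by $\frac{d}{k}$, satisfies these conditions with $\omega = \frac{d}{k} - 1$: transmitting fewer coordinates (smaller $k$) yields a larger $\omega$, while $k = d$ (no compression) gives $\omega = 0$, consistent with the interpretation above.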