Motivated by the analysis of large-scale machine learning under heavy-tailed gradient noise, we study decentralized optimization with gradient clipping, i.e., settings in which clipping operators are applied to the gradients or gradient estimates computed at local nodes before further processing. While vanilla gradient clipping has proven effective in mitigating the impact of heavy-tailed gradient noise in non-distributed setups, it incurs a bias that causes convergence issues in heterogeneous distributed settings. To address this inherent bias, we develop a smoothed clipping operator and propose a decentralized gradient method equipped with an error feedback mechanism, i.e., the clipping operator is applied to the difference between a local gradient estimator and the local stochastic gradient. We consider strongly convex and smooth local functions under symmetric heavy-tailed gradient noise that may not have finite moments of order greater than one. We show that the proposed decentralized gradient clipping method achieves a mean-square error (MSE) convergence rate of $O(1/t^\delta)$ with $\delta \in (0, 2/5)$, where the exponent $\delta$ does not depend on the existence of gradient noise moments of order $\alpha > 1$ and is lower bounded by a constant that depends on the condition number. To the best of our knowledge, this is the first MSE convergence result for decentralized gradient clipping under heavy-tailed noise without assuming bounded gradients. Numerical experiments validate our theoretical findings.
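To make the clipping-with-error-feedback idea concrete, below is a minimal NumPy sketch of one plausible instantiation. The smoothed clipping operator $\mathrm{clip}_\tau(v) = \tau v / (\tau + \|v\|)$, the toy quadratic local objectives, the ring mixing matrix, and all step-size and threshold values are illustrative assumptions rather than the paper's exact definitions; the sketch only shows the structure described in the abstract, namely that each node clips the difference between a fresh local stochastic gradient and its running gradient estimator before taking a decentralized step.

```python
# Minimal sketch (assumed instantiation, not the paper's exact algorithm):
# decentralized gradient method with a smoothed clipping operator applied
# in an error-feedback fashion, under symmetric heavy-tailed gradient noise.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: n nodes, each with a strongly convex quadratic
# f_i(x) = 0.5 * ||x - b_i||^2 and heterogeneous minimizers b_i.
n, d = 4, 5
b = rng.normal(size=(n, d))

def local_grad(i, x):
    """Stochastic gradient of f_i at x with symmetric heavy-tailed (Cauchy) noise."""
    noise = rng.standard_cauchy(size=d)   # no finite moments of order >= 1
    return (x - b[i]) + 0.1 * noise

def smooth_clip(v, tau):
    """Assumed smoothed clipping operator: shrinks v, keeping its norm below tau."""
    return tau * v / (tau + np.linalg.norm(v))

# Doubly stochastic mixing matrix for a ring topology.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))   # local iterates
g = np.zeros((n, d))   # local gradient estimators (error-feedback state)
tau, eta = 1.0, 0.05   # illustrative clipping threshold and step size

for t in range(2000):
    for i in range(n):
        # Error feedback: clip the *difference* between the fresh stochastic
        # gradient and the running estimator, then fold it back into g_i.
        g[i] += smooth_clip(local_grad(i, x[i]) - g[i], tau)
    # Gossip/mixing step followed by a gradient step using the estimators.
    x = W @ x - eta * g

x_star = b.mean(axis=0)   # minimizer of (1/n) * sum_i f_i
print("MSE to optimum:", np.mean(np.sum((x - x_star) ** 2, axis=1)))
```

With a constant step size this sketch only tracks a neighborhood of the optimum; the point of the construction is that the clipped quantity is the estimation error rather than the raw gradient, which is what removes the clipping bias in the heterogeneous setting.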