Motivated by the understanding and analysis of large-scale machine learning under heavy-tailed gradient noise, we study distributed optimization with gradient clipping, in which clipping operators are applied to the gradients or gradient estimates computed at local clients prior to further processing. While vanilla gradient clipping has proven effective in mitigating the impact of heavy-tailed gradient noise in non-distributed setups, it incurs a bias that causes convergence issues in heterogeneous distributed settings. To address this inherent bias, we develop a smoothed clipping operator and propose a distributed gradient method equipped with an error feedback mechanism, in which the clipping operator is applied to the difference between a local gradient estimator and the local stochastic gradient. We establish that, for the first time in the strongly convex setting with heavy-tailed gradient noise that may not have finite moments of order greater than one, the mean square error (MSE) of the proposed distributed gradient method converges to zero at a rate $O(1/t^\iota)$, $\iota \in (0, 1/2)$, where the exponent $\iota$ stays bounded away from zero as a function of the problem condition number and the first absolute moment of the noise; in particular, $\iota$ is shown to be independent of the existence of gradient noise moments of order $\alpha > 1$. Numerical experiments validate our theoretical findings.
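To make the error feedback mechanism concrete, the following is a minimal sketch of one local update of the kind described above. It is not the paper's algorithm: the smoothed clipping form `v * c / (c + ||v||)` is one common smooth surrogate for hard clipping, and the names `g_hat`, `g_stoch`, the step size `eta`, and the radius `c` are all illustrative assumptions.

```python
import numpy as np

def smoothed_clip(v, c):
    # Hypothetical smoothed clipping operator: rescales v continuously
    # toward the radius-c ball, avoiding the hard cutoff of vanilla
    # clipping (one common smooth surrogate; the paper's exact operator
    # may differ).
    return v * (c / (c + np.linalg.norm(v)))

def local_update(g_hat, g_stoch, x, eta, c):
    # One error-feedback step: clipping is applied to the *difference*
    # between the running gradient estimator g_hat and the fresh local
    # stochastic gradient g_stoch, so the estimator tracks the gradient
    # and the clipping bias does not accumulate.
    g_hat = g_hat + smoothed_clip(g_stoch - g_hat, c)
    x = x - eta * g_hat
    return g_hat, x
```

The key design point this sketch illustrates is that heavy-tailed noise enters only through the clipped difference, whose norm is bounded, while the estimator `g_hat` itself remains an unclipped, asymptotically unbiased tracker of the local gradient.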