We study the distributed stochastic gradient (D-SG) method and its accelerated variant (D-ASG) for solving decentralized strongly convex stochastic optimization problems in which the objective function is distributed over several computational units that lie on a fixed but arbitrary connected communication graph, are subject to local communication constraints, and have access only to noisy estimates of the gradients. We develop a framework that allows the stepsize and momentum parameters of these algorithms to be chosen so as to optimize performance by systematically trading off the bias, the variance, robustness to gradient noise, and dependence on network effects. When gradients do not contain noise, we also prove that distributed accelerated methods can \emph{achieve acceleration}, requiring $\mathcal{O}(\sqrt{\kappa} \log(1/\varepsilon))$ gradient evaluations and $\mathcal{O}(\sqrt{\kappa} \log(1/\varepsilon))$ communications to converge to the same fixed point as the non-accelerated variant, where $\kappa$ is the condition number and $\varepsilon$ is the target accuracy. To our knowledge, this is the first acceleration result where the iteration complexity scales with the square root of the condition number in the context of \emph{primal} distributed inexact first-order methods. For quadratic functions, we also provide finer performance bounds that are tight with respect to the bias and variance terms. Finally, we study a multistage version of D-ASG with parameters carefully varied over stages to ensure exact $\mathcal{O}(e^{-k/\sqrt{\kappa}})$ linear decay in the bias term as well as optimal $\mathcal{O}(\sigma^2/k)$ decay in the variance term. We illustrate through numerical experiments that our approach results in practical algorithms that are robust to gradient noise and that can outperform existing methods.
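For concreteness, the following is a minimal sketch of the D-ASG iteration in its commonly used form; the notation here is illustrative and assumes a doubly stochastic mixing matrix $W$ compatible with the communication graph, a stepsize $\alpha > 0$, a momentum parameter $\beta \geq 0$, and noisy local gradient estimates $\tilde{\nabla} f_i$:
\begin{align*}
x_i^{(k+1)} &= \sum_{j=1}^{n} W_{ij}\, y_j^{(k)} - \alpha\, \tilde{\nabla} f_i\big(y_i^{(k)}\big), \\
y_i^{(k+1)} &= (1+\beta)\, x_i^{(k+1)} - \beta\, x_i^{(k)},
\end{align*}
where node $i$ averages its neighbors' iterates through $W$, takes a noisy gradient step, and extrapolates with a Nesterov-style momentum term. Setting $\beta = 0$ recovers the non-accelerated D-SG update $x_i^{(k+1)} = \sum_{j} W_{ij}\, x_j^{(k)} - \alpha\, \tilde{\nabla} f_i(x_i^{(k)})$.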