We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms. The results are simple to apply and give explicit estimates for how the degree of asynchrony impacts the convergence rates of the iterates. Our results shorten, streamline and strengthen existing convergence proofs for several asynchronous optimization methods, and allow us to establish convergence guarantees for popular algorithms that have so far lacked a complete theoretical understanding. Specifically, we use our results to derive better iteration complexity bounds for proximal incremental aggregated gradient methods, to obtain tighter guarantees that depend on the average rather than the maximum delay for asynchronous stochastic gradient descent, to provide less conservative analyses of the speedup conditions for asynchronous block-coordinate implementations of Krasnoselskii-Mann iterations, and to quantify the convergence rates of totally asynchronous iterations under various assumptions on communication delays and update rates.
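To make the role of delay concrete, a minimal sketch of the delayed-gradient iteration underlying asynchronous stochastic gradient descent is given below; the symbols $\gamma$, $\tau_k$, and $\xi_k$ are standard notation assumed for illustration, not taken from the text above:
\[
x_{k+1} = x_k - \gamma \, \nabla f\bigl(x_{k-\tau_k}; \xi_k\bigr),
\]
where $\tau_k \ge 0$ is the staleness of the stochastic gradient applied at step $k$. Guarantees stated in terms of the maximum delay $\tau_{\max} = \max_k \tau_k$ can be loose when large delays are rare; the average delay $\tau_{\mathrm{avg}} = \frac{1}{K}\sum_{k=0}^{K-1} \tau_k$ referenced above never exceeds $\tau_{\max}$ and is often much smaller, which is why bounds depending on it are tighter.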