In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\xb) \triangleq \sum_{i=1}^{m}f_i(\xb)$ is: (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) simply convex. Our results show that Nesterov's accelerated gradient descent applied to the dual problem can be executed in a distributed manner and achieves the same optimal rates as the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition number.
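To make the dual mechanism concrete, the following is a minimal sketch under common assumptions for this setting (not necessarily the paper's exact parametrization): local copies $\theta_i \in \mathbb{R}^d$ are stacked as columns of $\Theta$, consensus is encoded through a gossip matrix $W$ whose kernel is spanned by the all-ones vector, and $\Lambda$, $\eta$, $\beta$ denote the dual variable, step size, and momentum parameter, respectively. The constrained primal problem and one accelerated step on its dual read
\[
\min_{\Theta \in \mathbb{R}^{d \times m}} \;\sum_{i=1}^{m} f_i(\theta_i)
\quad \text{s.t.} \quad \Theta \sqrt{W} = 0,
\]
\[
\begin{aligned}
\theta_i^{(t)} &= \arg\min_{\theta \in \mathbb{R}^d}\; f_i(\theta) + \big\langle \theta,\, (\Lambda_t \sqrt{W})_i \big\rangle
  && \text{(local step; equals } \nabla f_i^*\!\big({-(\Lambda_t \sqrt{W})_i}\big)\text{),} \\
Y_{t+1} &= \Lambda_t + \eta\, \Theta^{(t)} \sqrt{W}
  && \text{(dual gradient ascent),} \\
\Lambda_{t+1} &= Y_{t+1} + \beta\,\big(Y_{t+1} - Y_t\big)
  && \text{(Nesterov momentum).}
\end{aligned}
\]
Each dual gradient evaluation thus costs one local conjugate computation per node plus one multiplication by $\sqrt{W}$; a change of variables can replace $\sqrt{W}$ by $W$ so that every iteration corresponds to a single round of gossip communication with neighbors, which is where the spectral gap of the interaction matrix enters the complexity bounds.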