In this article, we propose a new approach, optimize then agree, for minimizing a sum $f = \sum_{i=1}^n f_i(x)$ of convex objective functions over a directed graph. The optimize then agree approach decouples the optimization step and the consensus step in a distributed optimization framework. The key motivation for optimize then agree is to guarantee that the disagreement between the agents' estimates at every iteration of the distributed optimization algorithm remains below any a priori specified tolerance; existing algorithms do not provide such a guarantee, which is required in many practical scenarios. In this method, each agent maintains an estimate of the optimal solution at every iteration and utilizes its locally available gradient information along with a finite-time approximate consensus protocol to move towards the optimal solution (hence the name Gradient-Consensus algorithm). We establish that the proposed algorithm has a global R-linear rate of convergence if the aggregate function $f$ is strongly convex and Lipschitz differentiable. We also show that, under the relaxed assumption that the $f_i$'s are convex and Lipschitz differentiable, the objective function error residual decreases at a Q-linear rate (in terms of the number of gradient computation steps) until it reaches a small value, which can be controlled through the tolerance specified for the finite-time approximate consensus protocol; no existing method in the literature offers such strong convergence guarantees when the $f_i$ are not necessarily strongly convex. The price of these improved guarantees on constraint satisfaction and convergence is a communication overhead of $O(k\log k)$ iterations, compared to $O(k)$ for traditional algorithms. Finally, we numerically evaluate the performance of the proposed algorithm by solving a distributed logistic regression problem.
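
The abstract describes the structure of each iteration (a local gradient step followed by a finite-time approximate consensus round) but not the exact update rules. Below is a minimal sketch of such an "optimize then agree" loop, assuming a fixed step size, a doubly stochastic mixing matrix `W` on an undirected communication graph (the paper itself treats directed graphs with a dedicated finite-time protocol), and repeated neighbor averaging as a stand-in for the finite-time approximate consensus step. All names and parameters here are illustrative, not the authors' implementation.

```python
import numpy as np

def approximate_consensus(X, W, tol):
    """Stand-in for the finite-time approximate consensus protocol:
    repeat neighbor averaging with mixing matrix W until every agent's
    estimate is within `tol` of the network average."""
    while np.max(np.linalg.norm(X - X.mean(axis=0), axis=1)) > tol:
        X = W @ X  # one round of communication with neighbors
    return X

def gradient_consensus(local_grads, x0, W, alpha=0.01, tol=1e-4, outer_iters=200):
    """Sketch of the 'optimize then agree' loop: each agent first takes a
    gradient step on its own objective f_i, then all agents run the
    approximate consensus protocol so that their estimates disagree by at
    most `tol` at every outer iteration."""
    n = len(local_grads)
    X = np.tile(np.asarray(x0, dtype=float), (n, 1))  # one row per agent
    for _ in range(outer_iters):
        # optimize: local gradient descent on each agent's f_i
        for i in range(n):
            X[i] -= alpha * local_grads[i](X[i])
        # agree: drive the disagreement below the prescribed tolerance
        X = approximate_consensus(X, W, tol)
    return X.mean(axis=0)

# Toy usage: 4 agents minimize f(x) = sum_i ||A_i x - b_i||^2 over a ring graph.
rng = np.random.default_rng(0)
A = [rng.standard_normal((10, 3)) for _ in range(4)]
b = [rng.standard_normal(10) for _ in range(4)]
grads = [lambda x, A=A[i], b=b[i]: 2 * A.T @ (A @ x - b) for i in range(4)]
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])  # doubly stochastic ring weights
x_star = gradient_consensus(grads, np.zeros(3), W)
```

With exact consensus this loop reduces to centralized gradient descent on the average of the $f_i$; with approximate consensus, the tolerance bounds how far each agent's iterate can drift from the network average at every iteration, mirroring the disagreement guarantee emphasized in the abstract.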

