Decentralized algorithms solve multi-agent optimization problems over a connected network, where information can be exchanged only with accessible neighbors. Although several decentralized optimization algorithms exist, there are still gaps in the convergence conditions and rates between decentralized and centralized algorithms. In this paper, we fill some of these gaps by considering two decentralized algorithms: EXTRA and NIDS. Both converge linearly for strongly convex objective functions. We answer two questions regarding them: What are the optimal upper bounds for their stepsizes? Do decentralized algorithms require stronger assumptions on the objective functions than centralized ones to achieve linear convergence? More specifically, we relax the conditions required for the linear convergence of both algorithms. For EXTRA, we show that the upper bound of the stepsize is comparable to that of centralized algorithms. For NIDS, the upper bound of the stepsize is shown to be exactly the same as that of the centralized ones. In addition, we relax the requirements on the objective functions and the mixing matrices, and we provide linear convergence results for both algorithms under the weakest conditions.
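For concreteness, the two iterations are commonly stated in the following forms. This is only a sketch in generic notation, with a uniform stepsize $\alpha$, mixing matrix $W$, and $\tilde{W}=(I+W)/2$; the exact parameterization used in the body of the paper may differ, and NIDS in particular also admits node-dependent stepsizes.
% Commonly cited forms of the EXTRA and NIDS iterations (illustrative sketch;
% the parameterization in the paper itself may differ).
\begin{align*}
  \text{EXTRA:} \quad
  \mathbf{x}^{k+2} &= (I + W)\,\mathbf{x}^{k+1} - \tilde{W}\,\mathbf{x}^{k}
    - \alpha\left[\nabla \mathbf{f}(\mathbf{x}^{k+1}) - \nabla \mathbf{f}(\mathbf{x}^{k})\right],\\
  \text{NIDS:} \quad
  \mathbf{x}^{k+2} &= \tilde{W}\left[2\,\mathbf{x}^{k+1} - \mathbf{x}^{k}
    - \alpha\left(\nabla \mathbf{f}(\mathbf{x}^{k+1}) - \nabla \mathbf{f}(\mathbf{x}^{k})\right)\right].
\end{align*}
The structural difference is that EXTRA applies the mixing matrix to the iterates and subtracts the gradient correction afterward, whereas NIDS applies the mixing matrix to the gradient-corrected iterate, which is what allows its stepsize bound to match the centralized one.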