This paper considers solving distributed optimization problems in peer-to-peer multi-agent networks. The network is assumed to be synchronous and connected. Using the proportional-integral (PI) control strategy, various algorithms with fixed stepsize have been developed. Two notable examples are the PI algorithm and the PI consensus algorithm. Although the PI algorithm has provable linear or exponential convergence without the standard requirement of (strong) convexity, a similar guarantee is unavailable for the PI consensus algorithm. In this paper, using Lyapunov theory, we establish exponential convergence of the PI consensus algorithm for global cost functions satisfying the restricted secant inequality, with rate-matching discretization and without requiring convexity. To accelerate the PI consensus algorithm, we incorporate local pre-conditioning in the form of constant positive definite matrices and numerically validate its efficiency against prominent distributed convex optimization algorithms. Unlike classical pre-conditioning, where only the gradients are multiplied by a pre-conditioner, the proposed pre-conditioning modifies both the gradients and the consensus terms, thereby controlling the effect of the communication graph on the algorithm.
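To make the pre-conditioning idea concrete, the following is a minimal sketch of a discretized PI-consensus-style update in which each agent applies a local positive definite matrix to both its gradient and its consensus terms. It assumes a standard proportional-integral consensus form with an Euler discretization; the function name, gains, and the exact placement of the pre-conditioners are illustrative assumptions, not the paper's precise update law.

```python
import numpy as np

def pi_consensus_step(x, v, grads, A, K, alpha, beta, h):
    """One Euler step of a pre-conditioned PI-consensus-style update (illustrative sketch).

    x, v   : (n, d) arrays of agent primal states and integral (consensus-error) states
    grads  : list of n gradient callables, one per agent's local cost
    A      : (n, n) symmetric adjacency matrix of the communication graph
    K      : list of n (d, d) positive definite local pre-conditioners
    alpha, beta : proportional / integral gains; h : fixed stepsize
    """
    n, d = x.shape
    x_new, v_new = x.copy(), v.copy()
    for i in range(n):
        # weighted disagreement of agent i with its neighbours (consensus term)
        disagreement = sum(A[i, j] * (x[i] - x[j]) for j in range(n))
        # the local pre-conditioner K[i] scales both the gradient and the consensus terms
        x_new[i] = x[i] - h * K[i] @ (grads[i](x[i]) + alpha * disagreement + v[i])
        v_new[i] = v[i] + h * beta * K[i] @ disagreement
    return x_new, v_new

# Toy usage: quadratic local costs f_i(x) = 0.5 * ||x - c_i||^2 on a complete graph,
# whose global minimizer is the mean of the c_i.
rng = np.random.default_rng(0)
n, d = 4, 2
c = rng.normal(size=(n, d))
grads = [lambda x, ci=c[i]: x - ci for i in range(n)]
A = np.ones((n, n)) - np.eye(n)        # complete graph, unit weights
K = [np.eye(d) for _ in range(n)]      # identity pre-conditioners (classical case)
x, v = rng.normal(size=(n, d)), np.zeros((n, d))
for _ in range(2000):
    x, v = pi_consensus_step(x, v, grads, A, K, alpha=1.0, beta=1.0, h=0.05)
# each row of x should be close to c.mean(axis=0)
```

Replacing the identity matrices in `K` with other positive definite matrices illustrates the paper's point that the pre-conditioner acts on the consensus terms as well, and hence on how the communication graph enters the update, rather than on the gradients alone.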