This paper proposes a distributed algorithm for a network of agents to solve an optimization problem with separable objective function and locally coupled constraints. Our strategy is based on reformulating the original constrained problem as the unconstrained optimization of a smooth (continuously differentiable) exact penalty function. Computing the gradient of this penalty function in a distributed way is challenging even under the separability assumptions on the original optimization problem. Our technical approach shows that the distributed computation problem for the gradient can be formulated as a system of linear algebraic equations defined by separable problem data. To solve it, we design an exponentially fast, input-to-state stable distributed algorithm that does not require the individual agent matrices to be invertible. We employ this strategy to compute the gradient of the penalty function at the current network state. Our distributed algorithmic solver for the original constrained optimization problem interconnects this estimation with the prescription of having the agents follow the resulting direction. Numerical simulations illustrate the convergence and robustness properties of the proposed algorithm.
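For concreteness, the problem class and the penalty idea described above can be sketched in generic notation (not taken from the paper, and with a simple smooth penalty term rather than the paper's exact penalty function):
\[
\min_{x_1,\dots,x_N}\ \sum_{i=1}^{N} f_i(x_i)
\quad \text{s.t.}\quad g_i\bigl(x_i,\{x_j\}_{j\in\mathcal{N}_i}\bigr)\le 0,\ \ i=1,\dots,N,
\qquad\leadsto\qquad
\min_{x_1,\dots,x_N}\ \sum_{i=1}^{N} f_i(x_i) \;+\; \frac{1}{\epsilon}\sum_{i=1}^{N}\rho\bigl(g_i(x_i,\{x_j\}_{j\in\mathcal{N}_i})\bigr),
\]
where agent $i$ knows only its own objective $f_i$, its constraint $g_i$, and the states of its neighbors $\mathcal{N}_i$, $\rho$ is a smooth penalty, and $\epsilon>0$ is a penalty parameter. The difficulty the paper addresses is that the gradient of such a penalty function generally couples quantities held by different agents, which is why its evaluation is itself posed as a distributed problem over the network.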
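To give a flavor of the linear-algebraic subproblem mentioned in the abstract, the sketch below shows a minimal, hypothetical setting in which each agent holds a block of rows of a network-wide linear system $Ax=b$ (so its local matrix is not invertible) and runs a plain gradient-plus-consensus iteration. This is only an illustrative stand-in, not the exponentially convergent, input-to-state stable algorithm designed in the paper.

```python
import numpy as np

# Hypothetical illustration: N agents jointly solve A x = b, where agent i holds
# only its own block row (A_i, b_i) and a local estimate x_i of the solution.
# Each update combines a local least-squares descent step with a consensus term
# over a ring communication graph. All names and parameters here are illustrative.

rng = np.random.default_rng(0)
N, n = 5, 4                                                   # agents, dimension of x

x_true = rng.standard_normal(n)
A_blocks = [rng.standard_normal((2, n)) for _ in range(N)]    # agent i's rows (rank-deficient locally)
b_blocks = [A @ x_true for A in A_blocks]                     # consistent right-hand sides

neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)} # ring graph
x = [np.zeros(n) for _ in range(N)]                           # local estimates

alpha, beta = 0.05, 0.5                                       # step size, consensus weight
for _ in range(20000):
    x_new = []
    for i in range(N):
        grad = A_blocks[i].T @ (A_blocks[i] @ x[i] - b_blocks[i])  # local residual term
        cons = sum(x[i] - x[j] for j in neighbors[i])              # disagreement with neighbors
        x_new.append(x[i] - alpha * (grad + beta * cons))
    x = x_new

# All local estimates should approach the common solution x_true.
print(max(np.linalg.norm(xi - x_true) for xi in x))
```

Even in this toy version, no single agent can recover $x$ from its own data alone; agreement on the solution emerges from the coupling between the local residual terms and the consensus term, which mirrors the role the distributed linear solver plays in the paper's gradient computation.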