We propose a divide-and-conquer (DAC) algorithm for constrained convex optimization over networks, where the global objective is the sum of local objectives held by individual agents. The algorithm is fully distributed: at each iteration, local subproblems are solved around selected fusion centers, and each fusion center coordinates only with its neighboring fusion centers. Under standard smoothness, strong convexity, and locality assumptions on the objective function, together with polynomial growth conditions on the underlying graph, we establish exponential convergence of the DAC iterations and derive explicit bounds for both exact and inexact local solvers. Numerical experiments on three representative losses ($L_2$ distance, quadratic, and entropy) confirm the theory and demonstrate the scalability and effectiveness of the proposed algorithm.
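As a minimal sketch of the problem setting summarized above (the symbols $G=(V,E)$, $f_i$, and $\mathcal{X}$ are illustrative placeholders rather than notation fixed in this abstract), the networked problem takes the form
\[
  \min_{x \in \mathcal{X}} \; F(x) := \sum_{i \in V} f_i(x),
\]
where each local objective $f_i$ is held by agent $i$ of the network $G=(V,E)$ and, under the locality assumption, depends only on the components of $x$ associated with a neighborhood of vertex $i$.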