This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with non-smooth objectives, which achieves a non-ergodic $O(1/k)$ iteration complexity with respect to the Karush--Kuhn--Tucker (KKT) residual. By leveraging the symmetric Gauss--Seidel decomposition, dHPR effectively decouples the linear operators in the objective functions and the consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, yielding a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $L_1$-regularized logistic regression problems.
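The non-ergodic $O(1/k)$ rate stems from the Halpern anchoring scheme, which averages each iterate with the initial point using vanishing weights $\lambda_k = 1/(k+2)$. The following is a minimal sketch of generic Halpern iteration on a scalar fixed-point problem, not the dHPR method itself; the operator $T$ and the function name `halpern` are illustrative choices, and in dHPR the role of $T$ would be played by the Peaceman--Rachford splitting operator.

```python
import math

def halpern(T, x0, iters):
    """Generic Halpern iteration:
        x_{k+1} = lam_k * x0 + (1 - lam_k) * T(x_k),
    with the standard anchoring weights lam_k = 1/(k+2).
    The fixed-point residual |x_k - T(x_k)| decays at a
    non-ergodic O(1/k) rate for nonexpansive T."""
    x = x0
    for k in range(iters):
        lam = 1.0 / (k + 2)
        x = lam * x0 + (1.0 - lam) * T(x)
    return x

# Illustration: T(x) = cos(x) is a contraction near its fixed
# point (the Dottie number, approximately 0.739085).
x = halpern(math.cos, 1.0, 2000)
residual = abs(x - math.cos(x))  # fixed-point residual, analogous
                                 # to the KKT residual in the paper
```

The anchoring term $\lambda_k x_0$ is what distinguishes Halpern iteration from the plain fixed-point (Krasnosel'skii--Mann) scheme and is the source of the last-iterate, rather than averaged-iterate, complexity guarantee.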