This paper considers the decentralized composite optimization problem. We propose a novel decentralized variance-reduced proximal-gradient algorithmic framework, called PMGT-VR, which combines several techniques, including multi-consensus, gradient tracking, and variance reduction. The proposed framework is built on an imitation of centralized algorithms, and we demonstrate that algorithms under this framework achieve convergence rates similar to those of their centralized counterparts. We also describe and analyze two representative algorithms, PMGT-SAGA and PMGT-LSVRG, and compare them to existing state-of-the-art proximal algorithms. To the best of our knowledge, PMGT-VR is the first linearly convergent decentralized stochastic framework that can solve decentralized composite optimization problems. Numerical experiments are provided to demonstrate the effectiveness of the proposed algorithms.