Decentralized optimization, particularly the class of decentralized composite convex optimization (DCCO) problems, has found many applications. Because communication congestion and random link dropouts are ubiquitous in practice, it is highly desirable to design decentralized algorithms that can handle stochastic communication networks. However, most existing algorithms for DCCO only work in networks that are deterministically connected within bounded communication rounds and therefore cannot be extended to stochastic networks. In this paper, we propose a new decentralized dual averaging (DDA) algorithm that can solve DCCO in stochastic networks. Under a rather mild condition on the stochastic networks, we show that the proposed algorithm attains global linear convergence if each local objective function is strongly convex. Our algorithm substantially improves upon existing DDA-type algorithms, which were only known to converge sublinearly prior to our work. The key to achieving the improved rate is the design of a novel dynamic averaging consensus protocol for DDA, which intuitively leads to more accurate local estimates of the global dual variable. To the best of our knowledge, this is the first linearly convergent DDA-type decentralized algorithm and also the first algorithm that attains global linear convergence for solving DCCO in stochastic networks. Numerical results are also presented to support our design and analysis.
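To make the dynamic-averaging-consensus idea concrete, below is a minimal NumPy sketch of a tracking-style stand-in, not the paper's exact DDA recursion: each agent mixes its dual (gradient-tracking) variable z_i with neighbors over a randomly activated network and corrects it with its local gradient increment, then takes a proximal step for a shared nonsmooth term. The problem instance, random mixing-matrix model, step size, and all variable names are illustrative assumptions.

```python
# Illustrative sketch only: a simplified tracking-based variant of decentralized
# composite optimization over a stochastic network, NOT the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, lam, alpha, T = 5, 10, 0.1, 0.05, 1000  # assumed constants

# Local losses f_i(x) = 0.5*||A_i x - b_i||^2 (strongly convex almost surely);
# the shared composite term is lam*||x||_1.
A = [0.3 * rng.standard_normal((20, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(20) for _ in range(n_agents)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

def prox_l1(v, tau):
    # Proximal operator of tau*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def random_mixing_matrix():
    # Toy stochastic network: each undirected edge is active with probability 1/2;
    # uniform edge weights plus self-loops give a symmetric doubly stochastic W.
    active = np.triu(rng.random((n_agents, n_agents)) < 0.5, 1)
    W = np.where(active, 1.0 / n_agents, 0.0)
    W = W + W.T
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

x = np.zeros((n_agents, dim))                           # primal iterates x_i
z = np.stack([grad(i, x[i]) for i in range(n_agents)])  # dual / tracking variables z_i

for _ in range(T):
    W = random_mixing_matrix()
    # Primal step: proximal update using the local estimate z_i of the average gradient.
    x_new = np.stack([prox_l1(x[i] - alpha * z[i], alpha * lam)
                      for i in range(n_agents)])
    # Dual step: mix z over the (random) network and add the local gradient
    # increment, so each z_i dynamically tracks the network-average gradient.
    z = W @ z + np.stack([grad(i, x_new[i]) - grad(i, x[i])
                          for i in range(n_agents)])
    x = x_new

print("consensus disagreement:", np.linalg.norm(x - x.mean(axis=0)))
```

The increment form grad(x_new) - grad(x_old) in the dual step is what lets each local z_i track the time-varying network-average gradient even though the mixing matrix is redrawn at every round, which is the intuition behind the more accurate local estimates of the global dual variable described above.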