In this paper, we propose a communication-efficient decentralized machine learning framework that solves a consensus optimization problem defined over a network of interconnected workers. The proposed algorithm, Censored and Quantized Generalized GADMM (CQ-GGADMM), leverages the worker grouping and decentralized learning ideas of Group Alternating Direction Method of Multipliers (GADMM), and pushes the frontier in communication efficiency by extending its applicability to generalized network topologies, while incorporating link censoring for negligible updates after quantization. We theoretically prove that CQ-GGADMM achieves a linear convergence rate when the local objective functions are strongly convex, under some mild assumptions. Numerical simulations corroborate that CQ-GGADMM achieves higher communication efficiency, in terms of both the number of communication rounds and transmit energy consumption, without compromising accuracy or convergence speed, compared with censored decentralized ADMM and the worker grouping method of GADMM.
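To make the "quantize, then censor negligible updates" idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: a worker quantizes its local model change and transmits it only if the quantized change exceeds a censoring threshold. The threshold `tau`, bit width `bits`, and helper names are hypothetical choices for illustration only.

```python
import numpy as np

def quantize(delta, bits=4):
    """Uniform stochastic quantization of an update vector to `bits` bits (illustrative)."""
    levels = 2 ** bits - 1
    scale = np.max(np.abs(delta)) + 1e-12           # per-vector scaling factor
    normalized = np.abs(delta) / scale * levels     # map magnitudes to [0, levels]
    lower = np.floor(normalized)
    prob = normalized - lower                       # probability of rounding up
    q = lower + (np.random.rand(*delta.shape) < prob)
    return np.sign(delta) * q * scale / levels

def censored_update(theta_new, theta_last_sent, tau=1e-3, bits=4):
    """Quantize the model change and transmit it only if it is non-negligible."""
    q_delta = quantize(theta_new - theta_last_sent, bits)
    if np.linalg.norm(q_delta) <= tau:              # censoring test: skip tiny updates
        return None, theta_last_sent                # nothing sent; neighbors reuse the old value
    return q_delta, theta_last_sent + q_delta       # send the quantized difference

# Example: a worker decides whether to broadcast its quantized local change.
theta_prev = np.zeros(10)
theta_now = theta_prev + 1e-5 * np.random.randn(10)
msg, theta_sent = censored_update(theta_now, theta_prev)
print("transmitted" if msg is not None else "censored (no transmission)")
```

When an update is censored, the communication round is skipped entirely for that link, which is the source of the savings in communication rounds and transmit energy reported above.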