In this paper, we consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus or single-task optimization as special cases, and allows for more general task relatedness models such as multitask smoothness and coupled optimization. In order to cope with communication constraints, we propose and study an adaptive decentralized strategy where the agents employ differential randomized quantizers to compress their estimates before communicating with their neighbors. The analysis shows that, under some general conditions on the quantization noise and for sufficiently small step-sizes $\mu$, the strategy is stable in terms of both mean-square error and average bit rate: by reducing $\mu$, it is possible to keep the estimation errors small (on the order of $\mu$) without the bit rate growing indefinitely as $\mu\rightarrow 0$. Simulations illustrate the theoretical findings and the effectiveness of the proposed approach, revealing that decentralized learning is achievable at the expense of only a few bits.
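As a minimal illustration of the kind of differential quantization step referred to above (illustrative notation, not the paper's exact scheme), agent $k$ at iteration $i$ may quantize only the innovation with respect to its previously transmitted value:
\begin{equation*}
q_{k,i} = Q\big(\psi_{k,i} - \widehat{\psi}_{k,i-1}\big), \qquad \widehat{\psi}_{k,i} = \widehat{\psi}_{k,i-1} + q_{k,i},
\end{equation*}
where $Q(\cdot)$ denotes a randomized (e.g., dithered) quantizer, $\psi_{k,i}$ is the current intermediate estimate, and $\widehat{\psi}_{k,i}$ is the reconstruction maintained identically by agent $k$ and its neighbors, so that only the quantized difference $q_{k,i}$ needs to be communicated. The symbols $\psi_{k,i}$, $\widehat{\psi}_{k,i}$, and $Q(\cdot)$ are assumptions introduced here for illustration.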