In this work we derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem. The agents employ the recently proposed ACTC (adapt-compress-then-combine) diffusion strategy, where the signals exchanged locally by neighboring agents are encoded with randomized differential compression operators. We provide a detailed characterization of the mean-square estimation error, which is shown to comprise a term related to the error that the agents would achieve without communication constraints, plus a term arising from compression. The analysis reveals quantitative relationships between the compression loss and fundamental attributes of the distributed regression problem, in particular the stochastic approximation error caused by the gradient noise and the network topology (through the Perron eigenvector). We show that knowledge of these relationships is critical for optimally allocating the communication resources across the agents, taking into account their individual attributes, such as the quality of their data or their degree of centrality in the network topology. We devise an optimized allocation strategy whose parameters can be learned online by the agents. Illustrative examples show that a significant performance improvement over a blind (i.e., uniform) resource allocation can be achieved by optimizing the allocation by means of the provided mean-square-error formulas.
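To make the adapt-compress-then-combine structure concrete, the following minimal Python sketch simulates an ACTC-style diffusion step for distributed linear regression with a randomized sparsifying compression operator. It is only an illustration of the general mechanism under stated assumptions: the combination matrix, step-size `mu`, damping factors `zeta` and `gamma`, sparsity budget `k_keep`, and the particular combine step are assumptions for this sketch and need not coincide with the exact recursion and notation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 10, 5                       # number of agents, parameter dimension
w_true = rng.standard_normal(M)    # common regression vector to be estimated

# Doubly-stochastic combination matrix for a ring topology (illustrative choice).
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[k, (k - 1) % N] = 0.25
    A[k, (k + 1) % N] = 0.25

def rand_k(x, k):
    """Unbiased randomized sparsifier: keep k random entries, rescaled by size/k."""
    mask = np.zeros_like(x)
    idx = rng.choice(x.size, size=k, replace=False)
    mask[idx] = x.size / k
    return x * mask

# Assumed step-size, damping factors, and sparsity budget (not values from the paper).
mu, zeta, gamma, k_keep = 0.02, 0.5, 0.5, 2

w = np.zeros((N, M))   # local estimates
q = np.zeros((N, M))   # compressed states shared over the links

for i in range(2000):
    psi = np.empty_like(w)
    # Adapt: each agent takes a stochastic-gradient (LMS) step on its streaming data.
    for a in range(N):
        u = rng.standard_normal(M)                     # regressor
        d = u @ w_true + 0.1 * rng.standard_normal()   # noisy measurement
        psi[a] = w[a] + mu * u * (d - u @ w[a])
    # Compress: differential compression of the innovation psi - q, so only a
    # compressed correction travels over each link.
    for a in range(N):
        q[a] = q[a] + zeta * rand_k(psi[a] - q[a], k_keep)
    # Combine: mix the compressed neighbor states into the local iterate
    # (one possible combine form; the paper's exact recursion may differ).
    w = psi + gamma * (A.T @ q - q)

msd = np.mean(np.sum((w - w_true) ** 2, axis=1))
print(f"network mean-square deviation after {i + 1} iterations: {msd:.3e}")
```

The differential form of the compress step is the key design choice: agents encode the difference between the new intermediate estimate and the previously transmitted state, so the quantity being compressed shrinks as the network converges and the compression-induced error term decays accordingly.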