We study the problem of distributed cooperative learning, where a group of agents seeks to agree on a set of hypotheses that best describes a sequence of private observations. When the set of hypotheses is large, we propose a belief update rule in which agents share compressed (either sparse or quantized) beliefs with an arbitrary positive compression rate. Our algorithm leverages a unified communication rule that enables agents to use a wide range of compression operators as black-box modules. We prove the almost sure asymptotic exponential convergence of the beliefs around the set of optimal hypotheses. Additionally, we show a non-asymptotic, explicit, and linear concentration rate in probability of the beliefs on the optimal hypothesis set. We provide numerical experiments to illustrate the communication benefits of our method. The simulation results show that the number of transmitted bits can be reduced to 5-10% of that required by the non-compressed method in the studied scenarios.
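To make the black-box compression idea concrete, the following is a minimal sketch (not the paper's algorithm) of two illustrative compression operators, top-k sparsification and uniform quantization, applied to a belief vector over a large hypothesis set before transmission. The function names, parameter values, and interface are assumptions introduced here for illustration only.

```python
import numpy as np

def top_k_sparsify(belief, k):
    """Illustrative sparsifier: keep the k largest belief entries, zero the rest."""
    compressed = np.zeros_like(belief)
    idx = np.argsort(belief)[-k:]  # indices of the k largest entries
    compressed[idx] = belief[idx]
    return compressed

def quantize(belief, levels):
    """Illustrative quantizer: round each entry to one of `levels` uniform levels."""
    scale = levels - 1
    return np.round(belief * scale) / scale

# Hypothetical usage: compress a belief over 1000 hypotheses before sending.
rng = np.random.default_rng(0)
belief = rng.random(1000)
belief /= belief.sum()  # normalize to a probability vector

sparse_msg = top_k_sparsify(belief, k=50)    # only ~5% of entries transmitted
quantized_msg = quantize(belief, levels=16)  # 4 bits per entry instead of a float
```

Because both operators expose the same vector-in, vector-out interface, an agent's update rule can treat either one as an interchangeable black-box module, which is the flavor of the unified communication rule described above.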