We study the problem of distributed cooperative learning, where a group of agents seeks to agree on a set of hypotheses that best describes a sequence of private observations. For the scenario where the set of hypotheses is large, we propose a belief update rule in which agents share compressed (either sparse or quantized) beliefs at an arbitrary positive compression rate. Our algorithm relies on a unified and straightforward communication rule that lets agents use a wide range of compression operators as black-box modules. We prove almost sure asymptotic exponential convergence of the beliefs around the set of optimal hypotheses. Additionally, we derive a non-asymptotic, explicit, and linear rate at which the beliefs concentrate, in probability, on the set of optimal hypotheses. We provide numerical experiments that illustrate the communication benefits of our method. In the studied scenarios, the simulation results show that the number of transmitted bits can be reduced to 5-10% of that required by the non-compressed method.
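To make the black-box compression interface concrete, the following is a minimal Python sketch of the two operator families mentioned above, sparsification and quantization. The function names, the top-k and uniform-grid constructions, and the toy belief vector are illustrative assumptions for this sketch, not the paper's actual operators or update rule.

```python
import numpy as np

def top_k_sparsify(belief, k):
    """Keep only the k largest-magnitude entries of the belief vector.

    Illustrative sparsifying compressor: all other entries are zeroed,
    so only k (index, value) pairs need to be transmitted.
    """
    out = np.zeros_like(belief)
    idx = np.argsort(np.abs(belief))[-k:]  # indices of the k largest entries
    out[idx] = belief[idx]
    return out

def uniform_quantize(belief, num_levels):
    """Snap each entry to a uniform grid of num_levels values over its range.

    Illustrative quantizing compressor: each entry can then be encoded
    with ceil(log2(num_levels)) bits instead of a full float.
    """
    lo, hi = belief.min(), belief.max()
    if hi == lo:
        return belief.copy()
    scaled = (belief - lo) / (hi - lo)            # map entries to [0, 1]
    levels = np.round(scaled * (num_levels - 1))  # snap to the grid
    return lo + levels * (hi - lo) / (num_levels - 1)

# Either operator can be plugged in as a black-box module Q(.):
rng = np.random.default_rng(0)
belief = rng.dirichlet(np.ones(20))  # toy belief over 20 hypotheses
for Q in (lambda b: top_k_sparsify(b, k=3),
          lambda b: uniform_quantize(b, num_levels=8)):
    compressed = Q(belief)
    print(np.linalg.norm(belief - compressed))  # compression error
```

Because the communication rule treats the compressor as a generic operator Q(.), either sketch above, or any other operator satisfying the algorithm's compression-rate assumption, could be swapped in without changing the rest of the update.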