In this paper, we study a distributed learning problem under constant-bit communication constraints. Specifically, we consider the distributed hypothesis testing (DHT) problem in which two distributed nodes may each transmit only a constant number of bits to a central decoder. In this setting, we show that to achieve the optimal error exponents, it suffices to compute the empirical distributions of the observed data sequences and encode them into the transmitted bits. With such a coding strategy, we develop a geometric approach in the space of distributions and characterize the optimal schemes. In particular, we establish the optimal achievable error exponents and coding schemes for the following cases: (i) both nodes can transmit $\log_2 3$ bits; (ii) one node can transmit $1$ bit while the other node is unconstrained; (iii) the joint distribution of the observations at the two nodes is conditionally independent given one hypothesis. Furthermore, we provide several numerical examples illustrating the theoretical results. Our results offer theoretical guidance for designing practical distributed learning rules, and the developed approach also shows new potential for establishing error exponents for DHT under more general communication constraints.
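To make the coding strategy concrete, the following is a minimal sketch of a type-based scheme for binary observations: each node reduces its sequence to its empirical distribution and quantizes it into one of three messages ($\log_2 3$ bits). The thresholds and the central decoder's rule here are illustrative stand-ins, not the optimal partition of the distribution simplex characterized in the paper.

```python
from collections import Counter

def empirical_distribution(seq, alphabet):
    """Empirical distribution (type) of a sequence over a finite alphabet."""
    counts = Counter(seq)
    n = len(seq)
    return {a: counts.get(a, 0) / n for a in alphabet}

def encode(seq, thresholds=(1/3, 2/3)):
    """Quantize the empirical frequency of symbol 1 into one of three
    messages (log2(3) bits). Thresholds are illustrative, not the
    optimal quantization regions derived in the paper."""
    p1 = empirical_distribution(seq, (0, 1))[1]
    if p1 < thresholds[0]:
        return 0
    elif p1 < thresholds[1]:
        return 1
    return 2

def decide(msg_x, msg_y):
    """Toy central decoder: declare hypothesis 1 only when both nodes
    report a high empirical frequency; otherwise declare hypothesis 0.
    A stand-in for the geometrically characterized decision regions."""
    return 1 if (msg_x == 2 and msg_y == 2) else 0
```

The key point the sketch illustrates is that the encoder depends on the data only through its empirical distribution, which is why a constant number of bits can suffice for optimal error exponents.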