In this paper, we study a distributed learning problem constrained to a constant number of communication bits. Specifically, we consider the distributed hypothesis testing (DHT) problem, in which two distributed nodes are each constrained to transmit a constant number of bits to a central decoder. In this setting, we show that to achieve the optimal error exponents, it suffices to consider the empirical distributions of the observed data sequences and encode them into the transmitted bits. With such a coding strategy, we develop a geometric approach in the space of distributions and establish an inner bound on the error exponent region. In particular, we characterize the optimal achievable error exponents and coding schemes for the following cases: (i) both nodes can transmit $\log_2 3$ bits; (ii) one node can transmit $1$ bit while the other node is unconstrained; (iii) the observations at the two nodes are conditionally independent given one hypothesis. Furthermore, we provide several numerical examples to illustrate the theoretical results. Our results offer theoretical guidance for designing practical distributed learning rules, and the developed approach also shows promise for establishing error exponents for DHT under more general communication constraints.
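As a concrete illustration of this coding strategy (a minimal sketch only; the partition $\{\Gamma_1, \Gamma_2, \Gamma_3\}$ of the probability simplex below is a generic placeholder, not the optimal partition constructed in the paper), a node observing a sequence $X^n = (X_1, \dots, X_n)$ over a finite alphabet $\mathcal{X}$ computes its type and transmits only the index of the region containing it:
$$\hat{P}_{X^n}(a) = \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\{X_i = a\}, \quad a \in \mathcal{X}, \qquad m = k \ \text{ if } \hat{P}_{X^n} \in \Gamma_k, \ k \in \{1, 2, 3\},$$
so that the message $m$ can be conveyed in $\log_2 3$ bits. The decoder then bases its decision on the received indices, and the resulting error exponents can be analyzed geometrically through the divergences between the regions $\Gamma_k$ and the distributions induced by each hypothesis.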