The diamond relay channel, where a source communicates with a destination via two parallel relays, is one of the canonical models for cooperative communications. We focus on the primitive variant, where each relay observes a noisy version of the source signal and forwards a compressed description over an orthogonal, noiseless, finite-rate link to the destination. Compress-and-forward (CF) is particularly effective in this setting, especially under oblivious relaying, where the relays lack access to the source codebook. While neural CF methods have been studied in single-relay channels, extending them to the two-relay case is non-trivial, as it requires fully distributed compression without any inter-relay coordination. We demonstrate that learning-based quantizers at the relays can harness input correlations by operating remotely yet collaboratively, enabling effective distributed compression in line with Berger-Tung-style coding. Each relay separately compresses its observation using a one-shot learned quantizer, and the destination jointly decodes the source message. Simulation results show that the proposed scheme, trained end-to-end with finite-order modulation, operates close to known theoretical bounds. These results demonstrate that neural CF can scale to multi-relay systems while maintaining both performance and interpretability.
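To make the described architecture concrete, the following is a minimal sketch of a two-relay neural CF setup of the kind outlined above: each relay maps its noisy observation to a discrete index with a small learned quantizer, and the destination fuses the two indices to recover the transmitted symbol, with everything trained end-to-end. It assumes PyTorch, a toy M-PAM constellation, AWGN relay observations, and a Gumbel-softmax relaxation for the discrete indices; all module names, rates, and noise levels are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of distributed neural CF over a primitive diamond relay channel.
# Assumptions (not from the paper): PyTorch, 4-PAM source symbols, AWGN at the relays,
# Gumbel-softmax to make the hard quantization indices differentiable.
import torch
import torch.nn as nn
import torch.nn.functional as F

M = 4            # modulation order (illustrative)
R_BITS = 2       # bits per relay-to-destination link use (illustrative)
K = 2 ** R_BITS  # number of quantization cells per relay


class RelayQuantizer(nn.Module):
    """One-shot learned quantizer: noisy scalar observation -> one of K indices."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, K))

    def forward(self, y, tau=1.0):
        logits = self.net(y)
        # Straight-through Gumbel-softmax: hard one-hot index, differentiable surrogate gradient.
        return F.gumbel_softmax(logits, tau=tau, hard=True)


class JointDecoder(nn.Module):
    """Destination: fuses both relay indices and outputs a posterior over the M symbols."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * K, hidden), nn.ReLU(), nn.Linear(hidden, M))

    def forward(self, idx1, idx2):
        return self.net(torch.cat([idx1, idx2], dim=-1))


relay1, relay2, decoder = RelayQuantizer(), RelayQuantizer(), JointDecoder()
params = list(relay1.parameters()) + list(relay2.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

constellation = torch.linspace(-1.0, 1.0, M)  # toy M-PAM constellation

for step in range(1000):
    labels = torch.randint(0, M, (256,))
    x = constellation[labels].unsqueeze(-1)    # transmitted symbol
    y1 = x + 0.5 * torch.randn_like(x)         # relay 1 observation (AWGN)
    y2 = x + 0.5 * torch.randn_like(x)         # relay 2 observation (AWGN)
    logits = decoder(relay1(y1), relay2(y2))   # distributed compression + joint decoding
    loss = F.cross_entropy(logits, labels)     # end-to-end symbol-detection loss
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the two quantizers are trained jointly with the shared decoder but never exchange information at run time, correlated observations can be exploited in a Berger-Tung-like, fully distributed manner; the actual scheme in the paper may differ in architecture, loss, and training details.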