In this work, a deep learning-based quantization scheme for log-likelihood ratio (L-value) storage is introduced. We analyze the dependency between the average magnitudes of different L-values from the same quadrature amplitude modulation (QAM) symbol and show that they follow a consistent ordering. Based on this, we design a deep autoencoder that jointly compresses and separately reconstructs each L-value, allowing the use of a weighted loss function that aims to reconstruct low-magnitude inputs more accurately. Our method is shown to be competitive with state-of-the-art maximum mutual information quantization schemes, reducing the required memory footprint by up to a factor of two with a performance loss smaller than 0.1 dB at fewer than two effective bits per L-value, or smaller than 0.04 dB at 2.25 effective bits. We experimentally show that our proposed method is a universal compression scheme, in the sense that after training on an LDPC-coded Rayleigh fading scenario we can reuse the same network, without further training, on other channel models and codes while preserving the same performance benefits.
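The abstract does not specify the exact form of the weighted loss, so the following is only an illustrative sketch of the idea: a reconstruction loss whose per-sample weights grow as the true L-value magnitude shrinks, so that low-magnitude (least reliable, most decision-critical) L-values are reconstructed more accurately. The inverse-magnitude weighting and the `eps` smoothing constant are assumptions, not the paper's actual formulation.

```python
import numpy as np

def weighted_mse(l_true, l_pred, eps=1.0):
    """Magnitude-weighted MSE: errors on low-|L| inputs are penalized more.

    The 1/(|L| + eps) weighting is a hypothetical choice for illustration;
    eps keeps the weight finite when the true L-value is exactly zero.
    """
    l_true = np.asarray(l_true, dtype=float)
    l_pred = np.asarray(l_pred, dtype=float)
    w = 1.0 / (np.abs(l_true) + eps)
    return float(np.mean(w * (l_true - l_pred) ** 2))

# The same absolute reconstruction error costs more on a small L-value
# than on a large one, steering the autoencoder's capacity accordingly.
loss_small = weighted_mse([0.0], [1.0])    # error of 1.0 on |L| = 0
loss_large = weighted_mse([10.0], [11.0])  # error of 1.0 on |L| = 10
```

Under a plain (unweighted) MSE both cases would contribute equally; here `loss_small` dominates, which matches the stated goal of reconstructing low-magnitude L-values more accurately.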