Finding optimal message quantization is a key requirement for low-complexity belief propagation (BP) decoding. To this end, we propose a floating-point surrogate model that imitates quantization effects as the addition of uniform noise, whose amplitudes are trainable variables. We verify that the surrogate model closely matches the behavior of a fixed-point implementation and propose a hand-crafted loss function to realize a trade-off between complexity and error-rate performance. A deep learning-based method is then applied to optimize the message bitwidths. Moreover, we show that parameter sharing can both ensure implementation-friendly solutions and result in faster training convergence than independent parameters. We provide simulation results for 5G low-density parity-check (LDPC) codes and report an error-rate performance within 0.2 dB of floating-point decoding at an average message quantization bitwidth of 3.1 bits. In addition, we show that the learned bitwidths also generalize to other code rates and channels.
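The core idea of the surrogate can be illustrated with a minimal sketch: a uniform quantizer with step size Δ produces an error that is approximately uniform on [-Δ/2, Δ/2], so replacing the hard (non-differentiable) quantizer with additive uniform noise of matching amplitude preserves the error statistics while keeping the step size a continuous, trainable quantity. The function names, the mid-rise quantizer, and the clipping range below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bitwidth, clip=8.0):
    """Hard fixed-point model: uniform quantizer on [-clip, clip].
    (Illustrative assumption; the paper's fixed-point format may differ.)"""
    step = 2 * clip / (2 ** bitwidth)
    return np.clip(np.round(x / step) * step, -clip, clip)

def surrogate(x, step):
    """Floating-point surrogate: add uniform noise of amplitude step/2,
    mimicking the quantization error of a quantizer with step `step`.
    Unlike rounding, this keeps `step` a continuous trainable variable."""
    noise = rng.uniform(-step / 2, step / 2, size=x.shape)
    return x + noise

# Both the hard quantizer and the surrogate yield an error whose
# variance is close to the classic step**2 / 12 of uniform noise.
x = rng.normal(0.0, 2.0, size=100_000)   # stand-in for BP messages
bitwidth, clip = 4, 8.0
step = 2 * clip / (2 ** bitwidth)
err_quant = quantize(x, bitwidth, clip) - x
err_surr = surrogate(x, step) - x
```

In a training loop, `surrogate` would replace the quantizer inside the BP decoder so that gradients with respect to the noise amplitudes (and hence the bitwidths) can flow; the hard `quantize` is then used only to validate that the learned bitwidths hold up in a fixed-point implementation.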