It is well known that belief propagation (BP) decoding variants of LDPC codes can be readily unrolled into neural networks by assigning distinct weights to the message-passing edges. In this paper we focus on how to determine these weights, cast as trainable parameters, within a deep learning framework. First, a new method is proposed to generate high-quality training data by exploiting an approximation to the target mixture density. Second, by tracing the training evolution curves, we expose a strong positive correlation between the training loss and the decoding metrics. Finally, to facilitate training convergence and reduce decoding complexity, we argue for drastically pruning the number of trainable parameters while carefully choosing the locations of the surviving ones, a strategy justified by extensive simulations.
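For concreteness, below is a minimal PyTorch sketch of such an unrolled decoder: a generic weighted (neural) min-sum variant with one trainable weight per check-to-variable edge per iteration, in the spirit of neural BP decoding. It is an illustrative assumption, not the specific parameterization, training-data generation, or pruning scheme studied in the paper; the class name `NeuralMinSumDecoder`, the iteration count, and the toy Hamming-code usage are all hypothetical.

```python
import torch
import torch.nn as nn

class NeuralMinSumDecoder(nn.Module):
    """Unrolled min-sum BP decoding with one trainable weight per
    check-to-variable edge in each unrolled iteration (a sketch)."""

    def __init__(self, H: torch.Tensor, n_iters: int = 5):
        super().__init__()
        self.n_iters = n_iters
        # Endpoints of each Tanner-graph edge: check index ci, variable index vi.
        self.ci, self.vi = H.nonzero(as_tuple=True)
        # One trainable weight per edge per iteration; 1.0 recovers plain min-sum.
        self.w = nn.Parameter(torch.ones(n_iters, self.ci.numel()))
        self.n = H.shape[1]

    def forward(self, llr: torch.Tensor) -> torch.Tensor:
        # llr: (batch, n) channel LLRs, positive sign favouring bit 0.
        E = self.ci.numel()
        v2c = llr[:, self.vi]            # variable-to-check messages
        wc2v = torch.zeros_like(v2c)
        for t in range(self.n_iters):
            # Check-node update (min-sum): sign product and magnitude minimum
            # over the other edges of the same check (assumes check degree >= 2).
            c2v = torch.empty_like(v2c)
            for e in range(E):
                mask = self.ci == self.ci[e]
                mask[e] = False
                others = v2c[:, mask]
                c2v[:, e] = (torch.prod(torch.sign(others), dim=1)
                             * torch.min(torch.abs(others), dim=1).values)
            # The trainable edge weights scale the check-to-variable messages.
            wc2v = self.w[t] * c2v
            # Variable-node update: channel LLR plus extrinsic weighted sum.
            new_v2c = torch.empty_like(v2c)
            for e in range(E):
                mask = self.vi == self.vi[e]
                mask[e] = False
                new_v2c[:, e] = llr[:, self.vi[e]] + wc2v[:, mask].sum(dim=1)
            v2c = new_v2c
        # Soft output: channel LLR plus all weighted incoming messages.
        out = llr.clone()
        for v in range(self.n):
            out[:, v] = out[:, v] + wc2v[:, self.vi == v].sum(dim=1)
        return out  # sign(out) gives the hard bit decisions


# Toy usage with the (7,4) Hamming code's parity-check matrix (all values
# below are dummies for illustration only):
H = torch.tensor([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]], dtype=torch.float32)
dec = NeuralMinSumDecoder(H, n_iters=5)
llr = torch.randn(8, 7)                 # dummy channel LLRs
bits = torch.zeros(8, 7)                # all-zero codeword as target
# With this LLR sign convention, P(bit = 1) = sigmoid(-llr).
loss = nn.BCEWithLogitsLoss()(-dec(llr), bits)
loss.backward()                         # gradients reach dec.w
```

The edge loops keep the sketch readable at the cost of O(E^2) work per iteration; a practical implementation would precompute per-node gather/scatter indices instead.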