It is known that a standard min-sum decoder can be unrolled as a neural network after assigning a weight to each edge. We adopt a similar decoding framework to seek a low-complexity, high-performance decoder for a class of finite-geometry LDPC codes at short and moderate block lengths. We elaborate on how to generate high-quality training data efficiently, and by tracing the evolution curves we illustrate the strong link between the training loss and the bit error rate of a neural decoder. Since there is a potential conflict between the objectives of neural networks and those of error-correction decoders, we highlight the necessity of restraining the number of trainable parameters, both to ensure training convergence and to reduce decoding complexity. Consequently, for the LDPC codes considered, their rigorous algebraic structure makes it feasible to cut the number of trainable parameters down to only one, while incurring only marginal performance loss in simulations.
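To make the weighted min-sum idea concrete, below is a minimal sketch of a single check-node update under the weighted (normalized) min-sum rule. The function name and interface are illustrative, not from the paper; with one shared weight per decoder, as in the one-parameter case the abstract mentions, the rule reduces to classical normalized min-sum.

```python
def weighted_min_sum_check(llrs, weight=1.0):
    """One check-node update of the weighted min-sum rule.

    llrs   : list of incoming variable-to-check LLR messages on the
             edges of one check node.
    weight : trainable scaling factor; in a neural (unrolled) min-sum
             decoder each edge may carry its own weight, while a single
             shared weight gives the one-parameter decoder.
    Returns the outgoing check-to-variable messages (extrinsic:
    each output excludes the message on its own edge).
    """
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        # sign product over the other edges (zeros treated as positive)
        sign = 1.0
        for x in others:
            sign = -sign if x < 0 else sign
        # minimum magnitude over the other edges
        mag = min(abs(x) for x in others)
        out.append(weight * sign * mag)
    return out
```

For example, `weighted_min_sum_check([2.0, -3.0, 1.5])` returns `[-1.5, 1.5, -2.0]`; scaling by a learned `weight < 1` damps the well-known overestimation of min-sum relative to sum-product.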