The problem of maximum likelihood decoding of error-correcting codes with a neural decoder is considered. It is shown that the neural decoder can be improved with two novel loss terms on the node activations. The first loss term imposes a sparsity constraint on the node activations, while the second loss term encourages the node activations to mimic those of a teacher decoder with better performance. The proposed method has the same run-time complexity and model size as the neural Belief Propagation decoder, while improving the decoding performance by up to $1.1$dB on BCH codes.
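The two loss terms can be sketched as a combined training objective: a standard decoding loss plus an L1 sparsity penalty on the activations and a distillation term matching a teacher decoder's activations. This is a minimal illustrative sketch, not the paper's exact formulation; the function name `decoder_loss`, the choice of L1 for sparsity, MSE for activation mimicking, and the weights `sparsity_weight` and `distill_weight` are all assumptions.

```python
import numpy as np

def decoder_loss(logits, targets, student_acts, teacher_acts,
                 sparsity_weight=0.01, distill_weight=0.1):
    """Hypothetical combined loss for training a neural decoder.

    Combines (1) binary cross-entropy on the decoded bits, (2) an L1
    sparsity penalty on the node activations, and (3) an MSE term that
    pushes the student's activations toward a teacher decoder's.
    The exact loss forms and weights here are illustrative assumptions.
    """
    # Binary cross-entropy between decoded bit probabilities and targets.
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12
    bce = -np.mean(targets * np.log(probs + eps)
                   + (1.0 - targets) * np.log(1.0 - probs + eps))
    # Sparsity term: L1 penalty drives node activations toward zero.
    sparsity = np.mean(np.abs(student_acts))
    # Distillation term: match the teacher decoder's activations.
    distill = np.mean((student_acts - teacher_acts) ** 2)
    return bce + sparsity_weight * sparsity + distill_weight * distill
```

For example, with activations already sparse and matching the teacher, only the cross-entropy term contributes; denser or mismatched activations increase the loss.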