The advancement of deep learning has led to the development of neural decoders for low-latency communications. However, neural decoders can be highly complex, which increases both computation and latency. We consider iterative pruning approaches (such as the lottery ticket hypothesis algorithm) to prune weights in neural decoders. Decoders with fewer weights can have lower latency and lower complexity while retaining the accuracy of the original model. This makes neural decoders more suitable for mobile and other edge devices with limited computational power. We also propose semi-soft decision decoding for neural decoders, which can be used to improve the bit error rate performance of the pruned network.
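Iterative pruning in the style of the lottery ticket hypothesis alternates training, magnitude-based pruning, and rewinding the surviving weights to their initial values. The following is a minimal sketch of that loop; the function names, the flat weight list, and the specific prune fraction are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch of iterative magnitude pruning (lottery-ticket style).
# Assumptions: weights are a flat list of floats, `train` is a caller-supplied
# training routine, and pruning removes the smallest-magnitude active weights.

def magnitude_mask(weights, mask, prune_fraction):
    """Deactivate the smallest-magnitude fraction of the still-active weights."""
    active = sorted(abs(w) for w, m in zip(weights, mask) if m)
    if not active:
        return mask
    threshold = active[int(len(active) * prune_fraction)]
    return [m and abs(w) >= threshold for w, m in zip(weights, mask)]

def iterative_prune(init_weights, train, rounds=3, prune_fraction=0.2):
    """Return a sparsity mask found by repeated train-prune-rewind cycles."""
    mask = [True] * len(init_weights)
    for _ in range(rounds):
        # Rewind: restart each round from the original initialization,
        # with pruned positions held at zero.
        trained = train([w if m else 0.0 for w, m in zip(init_weights, mask)])
        mask = magnitude_mask(trained, mask, prune_fraction)
    return mask
```

In a real decoder the mask would be applied per layer of the network, and `train` would run the usual gradient-descent loop; here it is left abstract to keep the pruning schedule itself visible.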