This paper introduces an incremental training framework for compressing popular Deep Neural Network (DNN)-based unfolded multiple-input-multiple-output (MIMO) detection algorithms such as DetNet. Incremental training is used to select the optimal network depth during training. To reduce the computational requirements, measured as the number of FLoating point OPerations (FLOPs), and to enforce sparsity in the weights, structured regularization is applied using group LASSO and sparse group LASSO. Our methods achieve a $98.9\%$ reduction in memory requirement and an $81.63\%$ reduction in FLOPs compared with DetNet, without compromising BER performance.
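For reference, the group LASSO and sparse group LASSO penalties named above take the standard forms shown below; the grouping of weights $\mathbf{w}_g$ (e.g., per layer or per neuron) and the symbols $\lambda$, $\alpha$, $p_g$ are generic notation used here for illustration, not taken from this paper.
\[
\Omega_{\mathrm{GL}}(\mathbf{w}) \;=\; \lambda \sum_{g=1}^{G} \sqrt{p_g}\,\lVert \mathbf{w}_g \rVert_2,
\qquad
\Omega_{\mathrm{SGL}}(\mathbf{w}) \;=\; (1-\alpha)\,\lambda \sum_{g=1}^{G} \sqrt{p_g}\,\lVert \mathbf{w}_g \rVert_2 \;+\; \alpha\,\lambda\,\lVert \mathbf{w} \rVert_1,
\]
where $G$ is the number of weight groups and $p_g$ the size of group $g$. The $\ell_2$ term drives entire groups to zero (structured pruning of neurons or layers), while the additional $\ell_1$ term in the sparse group LASSO promotes element-wise sparsity within the surviving groups.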