In this paper, we propose an encoder-decoder neural architecture (called Channelformer) to achieve improved channel estimation for orthogonal frequency-division multiplexing (OFDM) waveforms in downlink scenarios. Self-attention is employed to precode the input features before they are processed by the decoder. In particular, we implement multi-head attention in the encoder and a residual convolutional neural architecture in the decoder. We also employ a customized weight-level pruning with a fine-tuning process to slim the trained neural network, which significantly reduces the computational complexity and yields a low-complexity, low-latency solution. This enables reductions of up to 70\% in the number of parameters while maintaining almost identical performance to the complete Channelformer. We also propose an effective online training method for modern communication systems, based on the fifth generation (5G) new radio (NR) configuration, which requires only information available at the receiver. Using industrial-standard channel models, simulations show that the attention-based solution achieves superior estimation performance compared with other candidate neural-network methods for channel estimation.
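The weight-level pruning described above can be illustrated with a minimal magnitude-based sketch: zero out the smallest-magnitude weights until a target sparsity (here 70\%) is reached, after which the surviving weights would be fine-tuned. The function name, the magnitude criterion, and the fixed global sparsity are illustrative assumptions; the paper's actual pruning criterion and fine-tuning schedule may differ.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.7):
    """Zero out the smallest-magnitude entries of a weight tensor.

    Illustrative sketch of weight-level pruning (assumed magnitude
    criterion, not necessarily the paper's exact method). Returns the
    pruned weights and the boolean mask of surviving entries, which a
    fine-tuning step would use to keep pruned weights at zero.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # Threshold at the k-th smallest magnitude; everything at or
    # below it is pruned.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Example: prune a random 64x64 weight matrix to ~30% density.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned, mask = magnitude_prune(w, sparsity=0.7)
print(f"remaining fraction: {mask.mean():.2f}")
```

During fine-tuning, the mask is reapplied after each gradient step so that pruned connections stay zero while the remaining weights recover the lost accuracy.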