In this paper, we propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems, which consists of deep neural network (DNN)-aided pilot training, channel feedback, and hybrid analog-digital (HAD) precoding. Specifically, we develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter. To reduce the signaling overhead and the channel state information (CSI) mismatch caused by transmission delay, a two-timescale DNN composed of a long-term DNN and a short-term DNN is developed. The analog precoders are designed by the long-term DNN based on the CSI statistics and updated once per frame, where each frame consists of a number of time slots. In contrast, the digital precoders are optimized by the short-term DNN at each time slot based on the estimated low-dimensional equivalent CSI matrices. A two-timescale training method is also developed for the proposed DNN with a binary layer. We then analyze the generalization ability and signaling overhead of the proposed DNN-based algorithm. Simulation results show that our proposed technique significantly outperforms conventional schemes in terms of bit-error rate performance with reduced signaling overhead and shorter pilot sequences.
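To make the two-timescale structure concrete, the following is a minimal PyTorch sketch of the idea, not the paper's exact architecture: a long-term network maps received pilots into feedback bits through a binary layer (here implemented with a sign function and a straight-through gradient estimator, which is one common choice and is assumed rather than taken from the paper) and then into constant-modulus analog precoders, while a short-term network maps the low-dimensional equivalent CSI of each slot into the digital precoder. All dimensions, layer widths, and names (`LongTermDNN`, `ShortTermDNN`, `pilot_dim`, etc.) are illustrative assumptions.

```python
# Minimal sketch of a two-timescale DNN with a binary feedback layer (assumed details).
import torch
import torch.nn as nn


class BinaryLayer(torch.autograd.Function):
    """Sign-based binarization with a straight-through gradient estimator,
    turning real-valued receiver outputs into feedback bits."""

    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # straight-through: pass gradients unchanged


class LongTermDNN(nn.Module):
    """Maps pilots aggregated over a frame (CSI statistics) to feedback bits,
    then to analog precoder phases; updated once per frame."""

    def __init__(self, pilot_dim, num_bits, nt, nrf):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(pilot_dim, 256), nn.ReLU(),
            nn.Linear(256, num_bits),
        )
        self.decoder = nn.Sequential(
            nn.Linear(num_bits, 256), nn.ReLU(),
            nn.Linear(256, nt * nrf),  # one phase per analog precoder entry
        )
        self.nt, self.nrf = nt, nrf

    def forward(self, pilots):
        bits = BinaryLayer.apply(self.encoder(pilots))           # feedback bits
        phases = self.decoder(bits).view(-1, self.nt, self.nrf)
        # Constant-modulus analog precoder: unit-magnitude complex entries.
        return torch.polar(torch.ones_like(phases), phases)


class ShortTermDNN(nn.Module):
    """Maps the low-dimensional equivalent CSI of the current slot to the
    digital precoder (real and imaginary parts stacked); updated every slot."""

    def __init__(self, eq_csi_dim, nrf, ns):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(eq_csi_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * nrf * ns),
        )
        self.nrf, self.ns = nrf, ns

    def forward(self, eq_csi):
        out = self.net(eq_csi).view(-1, 2, self.nrf, self.ns)
        return torch.complex(out[:, 0], out[:, 1])


# Example shapes (assumed): 64 Tx antennas, 4 RF chains, 2 streams, 30 feedback bits.
long_term = LongTermDNN(pilot_dim=512, num_bits=30, nt=64, nrf=4)
short_term = ShortTermDNN(eq_csi_dim=16, nrf=4, ns=2)
F_rf = long_term(torch.randn(8, 512))   # analog precoder, refreshed once per frame
F_bb = short_term(torch.randn(8, 16))   # digital precoder, refreshed every time slot
```

In an end-to-end setup of this kind, both networks would be trained jointly against a system-level objective (e.g., a rate or detection loss), with the binary layer's straight-through estimator allowing gradients to flow through the feedback bits.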