We propose a deep learning-based channel estimation, quantization, feedback, and precoding method for downlink multiuser multiple-input multiple-output (MIMO) systems. In the proposed system, channel estimation and quantization for limited feedback are handled by a receiver-side deep neural network (DNN), while precoder selection is handled by a transmitter-side DNN. To emulate traditional channel quantization, a binarization layer is adopted at each receiver DNN; this layer also enables end-to-end learning. However, binarization yields inaccurate gradients, which can trap the receiver DNNs in a poor local minimum during training. To address this, we consider knowledge distillation, in which the existing DNNs are jointly trained with an auxiliary transmitter DNN. Acting as a teacher network, the auxiliary DNN allows the receiver DNNs to additionally exploit lossless gradients, which helps them avoid poor local minima. For the same number of feedback bits, the proposed DNN-based precoding scheme achieves a higher downlink rate than conventional linear precoding with codebook-based limited feedback.
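The gradient problem described above arises because the binarization (sign) operation has a zero derivative almost everywhere. A common workaround, which we sketch here for illustration (the paper's exact surrogate gradient is not specified in the abstract), is the straight-through estimator: quantize to ±1 in the forward pass, but pass the upstream gradient through unchanged wherever the input lies in [-1, 1].

```python
import numpy as np

# Illustrative sketch of a binarization layer with a straight-through
# estimator (STE); function names and the [-1, 1] clipping window are our
# assumptions, not taken from the paper.

def binarize_forward(x):
    """Forward pass: map real-valued features to +/-1 feedback bits."""
    return np.where(x >= 0, 1.0, -1.0)

def binarize_backward_ste(x, grad_out):
    """Backward pass: the true gradient of sign() is zero almost everywhere,
    so the STE substitutes the identity gradient inside [-1, 1] and zero
    outside, giving the receiver DNN a usable (if inaccurate) signal."""
    return grad_out * (np.abs(x) <= 1.0)

x = np.array([-2.0, -0.3, 0.4, 1.5])
bits = binarize_forward(x)                         # [-1., -1., 1., 1.]
grad = binarize_backward_ste(x, np.ones_like(x))   # [0., 1., 1., 0.]
```

Because the surrogate gradient disagrees with the true (zero) gradient, training can stall in poor local minima, which is exactly the issue the auxiliary teacher network with lossless gradients is meant to mitigate.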