Enabling highly-mobile millimeter wave (mmWave) and terahertz (THz) wireless communication applications requires overcoming the critical challenges associated with the large antenna arrays deployed in these systems. In particular, adjusting the narrow beams of these antenna arrays typically incurs a high beam training overhead that scales with the number of antennas. To address these challenges, this paper proposes a multi-modal machine learning-based approach that leverages positional and visual (camera) data collected from the wireless communication environment for fast beam prediction. The developed framework has been tested on a real-world vehicular dataset comprising practical GPS, camera, and mmWave beam training data. The results show that the proposed approach achieves $\approx 75\%$ top-1 beam prediction accuracy and close to $100\%$ top-3 beam prediction accuracy in realistic communication scenarios.
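For reference, the top-$k$ accuracy figures above follow the standard definition of the metric; the notation here (dataset size $N$, ground-truth optimal beam index $b_n^{\star}$, and predicted set $\hat{\mathcal{B}}_{n,k}$ of the $k$ most likely beams for sample $n$) is illustrative and not taken from the paper:
\begin{equation}
	\text{Top-}k \text{ accuracy} = \frac{1}{N} \sum_{n=1}^{N} \mathbb{1}\!\left\{ b_n^{\star} \in \hat{\mathcal{B}}_{n,k} \right\},
\end{equation}
i.e., the fraction of samples for which the optimal beam is among the $k$ beams predicted by the model.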