Beam selection for millimeter-wave links in a vehicular scenario is a challenging problem, as an exhaustive search over all candidate beam pairs cannot be assuredly completed within short contact times. We address this problem by proposing a novel expedited beam selection approach that leverages multimodal data collected from sensors such as LiDAR, cameras, and GPS. We propose individual-modality and distributed fusion-based deep learning (F-DL) architectures that can execute locally as well as at a mobile edge computing (MEC) center, along with a study of the associated tradeoffs. We also formulate and solve an optimization problem that accounts for practical beam-searching, MEC processing, and sensor-to-MEC data delivery latency overheads in determining the output dimensions of the above F-DL architectures. Results from extensive evaluations conducted on publicly available synthetic and home-grown real-world datasets reveal 95% and 96% improvements in beam selection speed over classical RF-only beam sweeping, respectively. F-DL also outperforms state-of-the-art techniques by 20-22% in predicting the top-10 best beam pairs.