3D perception using automotive-grade sensors is a rigid requirement in autonomous driving. MEMS LiDAR is emerging as an irresistible trend due to its lower cost, greater robustness, and compliance with mass-production standards. However, it suffers from a small field of view (FoV), which slows its adoption. In this paper, we propose LEAD, i.e., LiDAR Extender for Autonomous Driving, which extends MEMS LiDAR with a coupled image w.r.t both FoV and range. We propose a multi-stage propagation strategy based on depth distributions and an uncertainty map, which shows strong propagation ability. Moreover, our depth outpainting/propagation network follows a teacher-student training fashion, which transfers depth estimation ability to the depth completion network without passing on any scale error. To validate the LiDAR extension quality, we use a high-precision laser scanner to generate a ground-truth dataset. Quantitative and qualitative evaluations show that our scheme outperforms SOTAs by a large margin. We believe the proposed LEAD, along with the dataset, will benefit the community w.r.t depth research.
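To illustrate the flavor of uncertainty-guided multi-stage propagation (the paper's actual network is learned; this is only a hand-written sketch with a hypothetical `propagate_depth` helper, not the authors' method), each stage below fills pixels bordering the known region with a confidence-weighted average of their valid neighbors, growing the depth map outward from the narrow MEMS FoV:

```python
import numpy as np

def propagate_depth(depth, valid, confidence, stages=3):
    """Hypothetical multi-stage propagation sketch: at every stage, each
    invalid pixel adjacent to valid ones receives a confidence-weighted
    average of those neighbors, so depth spreads outward stage by stage."""
    depth, valid = depth.copy(), valid.copy()
    H, W = depth.shape
    for _ in range(stages):
        new_depth, new_valid = depth.copy(), valid.copy()
        for y in range(H):
            for x in range(W):
                if valid[y, x]:
                    continue  # already known, keep as-is
                num = den = 0.0
                # 4-connected neighbors; weight by per-pixel confidence
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and valid[ny, nx]:
                        w = confidence[ny, nx]
                        num += w * depth[ny, nx]
                        den += w
                if den > 0.0:
                    new_depth[y, x] = num / den
                    new_valid[y, x] = True
        depth, valid = new_depth, new_valid
    return depth, valid
```

In the real system, the per-stage update would be a learned outpainting network conditioned on the image and an estimated depth distribution, and the confidence would come from the predicted uncertainty map rather than being given.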