LiDAR sensors are widely used in autonomous driving because they provide reliable 3D spatial information. However, LiDAR data are spatially sparse, and LiDAR operates at a lower frame rate than cameras. To generate point clouds that are denser in both space and time, we propose the first future pseudo-LiDAR frame prediction network. Given consecutive sparse depth maps and RGB images, we first coarsely predict a future dense depth map from dynamic motion information. To suppress errors in the optical flow estimation, we propose an inter-frame aggregation module that fuses the warped depth maps with adaptive weights. We then refine the predicted dense depth map using static contextual information. The future pseudo-LiDAR frame is obtained by converting the predicted dense depth map into the corresponding 3D point cloud. Experimental results show that our method outperforms existing solutions on the popular KITTI benchmark.
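The adaptive-weight fusion in the inter-frame aggregation module can be sketched as a per-pixel softmax over confidence scores attached to each warped depth map. This is a minimal illustrative sketch, not the paper's actual (learned) module: the `confidences` input and the softmax weighting are assumptions introduced here for clarity.

```python
import numpy as np

def fuse_warped_depths(warped: np.ndarray, confidences: np.ndarray) -> np.ndarray:
    """Fuse K warped depth maps (shape K x H x W) with adaptive per-pixel
    weights derived from confidence scores (also K x H x W).

    Hypothetical sketch: the weights here are a softmax over hand-supplied
    confidences, whereas in the paper the weights are predicted by a network.
    """
    # Numerically stable per-pixel softmax over the K candidate frames.
    w = np.exp(confidences - confidences.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    # Weighted average of the warped depth candidates at every pixel.
    return (w * warped).sum(axis=0)

# Two warped candidates with equal confidence fuse to their average.
warped = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 3.0)])
conf = np.zeros_like(warped)
fused = fuse_warped_depths(warped, conf)
```

With equal confidences the fusion reduces to a plain mean; unequal confidences shift each pixel toward the more trusted warped frame, which is the intuition behind down-weighting pixels where optical flow estimation is unreliable.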
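The final step, converting a predicted dense depth map into a pseudo-LiDAR point cloud, is standard pinhole back-projection. The sketch below assumes known camera intrinsics (`fx`, `fy`, `cx`, `cy`); the values in the example are placeholders, not KITTI's actual calibration.

```python
import numpy as np

def depth_to_pseudo_lidar(depth: np.ndarray, fx: float, fy: float,
                          cx: float, cy: float) -> np.ndarray:
    """Back-project a dense depth map (H x W, in metres) into an N x 3
    point cloud in the camera coordinate frame via the pinhole model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # x grows rightward from the principal point
    y = (v - cy) * z / fy  # y grows downward (image convention)
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only pixels with valid positive depth

# Tiny synthetic example with placeholder intrinsics.
depth = np.full((4, 4), 10.0)
cloud = depth_to_pseudo_lidar(depth, fx=700.0, fy=700.0, cx=2.0, cy=2.0)
```

In practice the resulting camera-frame points would also be transformed into the LiDAR (velodyne) frame using the sensor extrinsics before comparison with real scans.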