High-definition (HD) semantic map generation of the environment is an essential component of autonomous driving. Existing methods have achieved good performance on this task by fusing different sensor modalities, such as LiDAR and camera. However, current works fuse the sensor data only at the raw-data or network feature level and consider only short-range HD map generation, limiting their deployment in realistic autonomous driving applications. In this paper, we focus on building HD maps in the short range, i.e., within 30 m, and also predicting long-range HD maps up to 90 m, which is required by downstream path planning and control tasks to improve the smoothness and safety of autonomous driving. To this end, we propose a novel network named SuperFusion, which exploits the fusion of LiDAR and camera data at multiple levels. We use LiDAR depth to improve image depth estimation and use image features to guide long-range LiDAR feature prediction. We benchmark our SuperFusion on the nuScenes dataset and a self-recorded dataset and show that it outperforms the state-of-the-art baseline methods by large margins on all intervals. Additionally, we apply the generated HD maps to a downstream path planning task, demonstrating that the long-range HD maps predicted by our method lead to better path planning for autonomous vehicles. Our code and self-recorded dataset will be available at https://github.com/haomo-ai/SuperFusion.
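To make the multi-level fusion idea more concrete, the following is a minimal sketch, not the authors' actual SuperFusion implementation, of the two guidance directions named in the abstract: sparse LiDAR depth guiding image depth estimation for BEV lifting, and image BEV features guiding long-range LiDAR feature prediction. All module names, channel sizes, and the residual completion design are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of the two fusion directions described in the abstract.
# Not the authors' implementation; all names and shapes are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthGuidedCameraBranch(nn.Module):
    """Predict a per-pixel depth distribution for lifting image features to a
    frustum; sparse projected LiDAR depth is concatenated as guidance (it could
    equally serve as supervision)."""

    def __init__(self, in_ch=64, depth_bins=48, out_ch=64):
        super().__init__()
        self.depth_head = nn.Conv2d(in_ch + 1, depth_bins, kernel_size=1)
        self.feat_head = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, img_feat, sparse_lidar_depth):
        # img_feat: (B, C, H, W); sparse_lidar_depth: (B, 1, H, W), zero where no return
        x = torch.cat([img_feat, sparse_lidar_depth], dim=1)
        depth_prob = F.softmax(self.depth_head(x), dim=1)          # (B, D, H, W)
        context = self.feat_head(img_feat)                         # (B, C', H, W)
        # Outer product lifts image features into a camera frustum volume
        frustum = depth_prob.unsqueeze(1) * context.unsqueeze(2)   # (B, C', D, H, W)
        return frustum


class ImageGuidedLidarCompletion(nn.Module):
    """Use image BEV features to guide prediction of LiDAR BEV features in the
    long-range region where LiDAR returns are sparse or absent."""

    def __init__(self, ch=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
        )

    def forward(self, lidar_bev, img_bev):
        # Both inputs: (B, C, H_bev, W_bev) on the same BEV grid.
        predicted = self.fuse(torch.cat([lidar_bev, img_bev], dim=1))
        # Residual connection: keep observed LiDAR features, add predicted ones.
        return lidar_bev + predicted


if __name__ == "__main__":
    B, C, H, W = 1, 64, 32, 88
    cam = DepthGuidedCameraBranch(in_ch=C)
    frustum = cam(torch.randn(B, C, H, W), torch.rand(B, 1, H, W))
    print("frustum features:", tuple(frustum.shape))

    comp = ImageGuidedLidarCompletion(ch=C)
    bev = comp(torch.randn(B, C, 100, 100), torch.randn(B, C, 100, 100))
    print("completed LiDAR BEV:", tuple(bev.shape))
```

In this sketch the frustum features would still need to be splatted onto the BEV grid before fusion with the LiDAR branch; the residual completion is one plausible way to let image features fill in the long-range region while preserving the LiDAR features that were actually observed.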