We present ONCE-3DLanes, a real-world autonomous driving dataset with lane layout annotations in 3D space. Conventional 2D lane detection from a monocular image degrades downstream planning and control in autonomous driving when the road is uneven; predicting the 3D lane layout is therefore necessary and enables effective and safe driving. However, existing 3D lane detection datasets are either unpublished or synthesized in simulated environments, which severely hampers progress in this field. In this paper, we take steps toward addressing these issues. By exploiting the explicit correspondence between point clouds and image pixels, we design a dataset annotation pipeline that automatically generates high-quality 3D lane locations from 2D lane annotations in 211K road scenes. In addition, we present an extrinsic-free, anchor-free method, called SALAD, that regresses the 3D coordinates of lanes in the image view without converting the feature map to a bird's-eye view (BEV). To facilitate future research on 3D lane detection, we benchmark the dataset, provide a novel evaluation metric, and perform extensive experiments on both existing approaches and our proposed method. Our work aims to revive interest in 3D lane detection in real-world scenarios, and we believe it can lead to both expected and unexpected innovations in academia and industry.
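To make the core idea of the annotation pipeline concrete, the sketch below illustrates one plausible reading of "exploiting the explicit correspondence between point clouds and image pixels": project LiDAR points into the image plane and keep those that land on annotated 2D lane pixels, yielding 3D lane locations. This is a minimal illustration under assumed inputs (the function name, a camera-from-LiDAR extrinsic matrix, intrinsics, and a binary lane mask are all hypothetical), not the authors' actual implementation.

```python
# Minimal sketch: lift 2D lane annotations to 3D by projecting LiDAR points
# into the image and keeping points whose projections hit lane pixels.
# All names below are illustrative assumptions, not the paper's code.
import numpy as np

def lift_lanes_to_3d(points_lidar, T_cam_from_lidar, K, lane_mask):
    """points_lidar: (N, 3) LiDAR points; T_cam_from_lidar: (4, 4) extrinsics;
    K: (3, 3) camera intrinsics; lane_mask: (H, W) bool map of 2D lane pixels.
    Returns 3D points (camera frame) whose projections fall on lane pixels."""
    n = points_lidar.shape[0]
    # Transform LiDAR points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    # Discard points behind the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Perspective projection onto the image plane.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    # Keep projections inside the image that land on annotated lane pixels.
    h, w = lane_mask.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    on_lane = np.zeros(len(pts_cam), dtype=bool)
    on_lane[valid] = lane_mask[v[valid], u[valid]]
    return pts_cam[on_lane]
```

In practice such a pipeline would also need outlier filtering and lane-wise grouping of the recovered points, but the projection step above is the piece the abstract describes.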