For autonomous vehicles, accurate LiDAR-camera calibration is a prerequisite for multi-sensor perception systems. However, existing calibration techniques require either a complicated setup with various calibration targets or an initial calibration provided beforehand, which greatly impedes their applicability in large-scale autonomous vehicle deployment. To tackle these issues, we propose a novel method to calibrate the extrinsic parameters between LiDAR and camera in road scenes. Our method extracts line features from static straight-line-shaped objects, such as road lanes and poles, in both the image and the point cloud, and formulates the initial extrinsic calibration as a perspective-3-lines (P3L) problem. Subsequently, a cost function defined under the semantic constraints of the line features is designed to refine the coarse calibration. The whole procedure is fully automatic and user-friendly, requiring neither adjustments to the environment nor an initial calibration provided beforehand. We conduct extensive experiments on KITTI and our in-house dataset; quantitative and qualitative results demonstrate the robustness and accuracy of our method.
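To make the two-stage formulation concrete, the refinement step can be read as a semantic line-alignment problem of the following general form. This is a minimal sketch under the assumption that the cost penalizes projected point-to-line distances for semantically matched line features; the symbols $R$, $t$, $\pi$, $\mathcal{P}_i$, and $\ell_i$ are illustrative and not the paper's own notation:
\begin{equation}
(\hat{R},\hat{t}) \;=\; \operatorname*{arg\,min}_{R,\,t} \;\sum_{i} \sum_{\mathbf{p}\in\mathcal{P}_i} d\big(\pi(R\mathbf{p}+t),\,\ell_i\big)^{2},
\end{equation}
where $\mathcal{P}_i$ denotes the LiDAR points assigned to the $i$-th line feature (e.g., a lane or a pole), $\ell_i$ is the corresponding 2D line in the image, $\pi(\cdot)$ is the camera projection, and $d(\cdot,\cdot)$ is the image-plane point-to-line distance; under this reading, the P3L stage would supply the coarse initialization $(R_0,t_0)$ from three 2D-3D line correspondences.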