Monocular camera sensors are vital to intelligent vehicle operation and automated driving assistance, and they are also heavily employed in traffic control infrastructure. Calibrating a monocular camera, however, is time-consuming and often requires significant manual intervention. In this work, we present an extrinsic camera calibration approach that automates the parameter estimation by utilizing semantic segmentation information from images and point clouds. Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle with high-precision localization to capture a point cloud of the camera's environment. Afterward, a mapping between the camera and world coordinate spaces is obtained by performing a lidar-to-camera registration of the semantically segmented sensor data. We evaluate our method on simulated and real-world data and demonstrate low error measurements in the calibration results. Our approach is suitable for infrastructure sensors as well as vehicle sensors, and it does not require motion of the camera platform.
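The abstract does not spell out how the semantic lidar-to-camera registration works. As a minimal sketch of the general idea (not the authors' actual method), one can project semantically labeled lidar points into the semantically segmented image under a candidate extrinsic pose and score how often the labels agree; the pose parameters are then refined to maximize that agreement. The function names, the pinhole intrinsics `K`, and the yaw-only grid search below are illustrative assumptions.

```python
import numpy as np

def project_points(points, R, t, K):
    """Project Nx3 lidar points into the image with a pinhole model.

    R, t map world coordinates into the camera frame; K is the 3x3
    intrinsic matrix. Returns pixel coordinates and a mask of points
    in front of the camera.
    """
    cam = R @ points.T + t[:, None]          # 3xN points in camera frame
    in_front = cam[2] > 0                    # keep points with positive depth
    uv = K @ cam[:, in_front]
    uv = (uv[:2] / uv[2]).T                  # perspective division -> Nx2 pixels
    return uv, in_front

def semantic_alignment_score(points, labels, seg_image, R, t, K):
    """Fraction of projected lidar points whose semantic class matches
    the segmentation label of the image pixel they land on."""
    uv, in_front = project_points(points, R, t, K)
    h, w = seg_image.shape
    px = np.round(uv).astype(int)
    valid = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    if valid.sum() == 0:
        return 0.0
    img_labels = seg_image[px[valid, 1], px[valid, 0]]
    return float(np.mean(img_labels == labels[in_front][valid]))

def refine_yaw(points, labels, seg_image, t, K, yaw_range=np.deg2rad(5), steps=51):
    """Toy registration: grid-search a yaw offset around the coarse initial
    pose and keep the one with the best semantic agreement. A full method
    would optimize all six extrinsic parameters."""
    best_score, best_yaw = -1.0, 0.0
    for yaw in np.linspace(-yaw_range, yaw_range, steps):
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
        score = semantic_alignment_score(points, labels, seg_image, R, t, K)
        if score > best_score:
            best_score, best_yaw = score, yaw
    return best_score, best_yaw
```

In practice such a score is maximized over the full 6-DoF pose, starting from the coarse initial measurement the abstract mentions; the grid search here only illustrates the objective.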