Most sensor setups for onboard autonomous perception are composed of LiDARs and vision systems, as they provide complementary information that improves the reliability of the different algorithms necessary to obtain a robust scene understanding. However, the effective use of information from different sources requires an accurate calibration between the sensors involved, which usually implies a tedious and burdensome process. We present a method to calibrate the extrinsic parameters of any pair of sensors involving LiDARs, monocular or stereo cameras, of the same or different modalities. The procedure is composed of two stages: first, reference points belonging to a custom calibration target are extracted from the data provided by the sensors to be calibrated, and second, the optimal rigid transformation is found through the registration of both point sets. The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups. In order to assess the performance of the proposed method, a novel evaluation suite built on top of a popular simulation framework is introduced. Experiments on the synthetic environment show that our calibration algorithm significantly outperforms existing methods, whereas real data tests corroborate the results obtained in the evaluation suite. Open-source code is available at https://github.com/beltransen/velo2cam_calibration
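The second stage described above, finding the optimal rigid transformation through the registration of the two reference-point sets, is an instance of the classic absolute-orientation problem. The sketch below is a generic Kabsch/Umeyama solution in Python with NumPy, not the authors' implementation: the rotation follows from an SVD of the cross-covariance matrix of the centered point sets, and the translation from the centroids. The point values used are hypothetical.

```python
# Minimal sketch of rigid registration between two corresponding point sets
# (e.g., reference points of the calibration target as seen by each sensor).
# Illustrative only; not the velo2cam_calibration code.
import numpy as np

def rigid_registration(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    assert src.shape == dst.shape and src.shape[1] == 3
    # Center both point sets on their centroids.
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # SVD of the cross-covariance matrix yields the optimal rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Reflection guard: enforce det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

if __name__ == "__main__":
    # Hypothetical reference points from one sensor (src) and the other (dst).
    src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0], [1.0, 1.0, 0.5]])
    angle = np.deg2rad(30.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.5, -0.2, 1.0])
    dst = src @ R_true.T + t_true
    R, t = rigid_registration(src, dst)
    print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With at least three non-collinear correspondences this closed-form solution is exact for noise-free data and least-squares optimal otherwise, which is why SVD-based registration is the standard tool once reliable reference points have been extracted from both sensors.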