The fusion of multi-modal sensors has become increasingly popular in autonomous driving and intelligent robotics, since it provides richer information than any single sensor and enhances reliability in complex environments. Multi-sensor extrinsic calibration is a key prerequisite for sensor fusion. However, such calibration is difficult due to the variety of sensor modalities and the need for calibration targets and human labor. In this paper, we present a new targetless cross-modal calibration framework, focusing on the extrinsic transformations among stereo cameras, thermal cameras, and laser sensors. Specifically, the calibration between the stereo camera and the laser is conducted in 3D space by minimizing the registration error, while the extrinsics of the thermal camera with respect to the other two sensors are estimated by optimizing the alignment of edge features. Our method requires no dedicated targets and performs multi-sensor calibration in a single shot without human interaction. Experimental results show that the calibration framework is accurate and applicable to general scenes.
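The stereo-laser calibration described above minimizes a 3D registration error between the stereo-reconstructed point cloud and the laser point cloud. As a minimal sketch of this idea (not the paper's actual pipeline), the following assumes point correspondences are already known and recovers the rigid extrinsic transform in closed form via the Kabsch/Umeyama SVD method; in practice, correspondences would come from an ICP-style association step.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate the rigid transform (R, t) that aligns src to dst by
    minimizing the registration error sum_i ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 3) arrays of corresponding 3D points
    (e.g., stereo-reconstructed points and laser points).
    """
    # Center both point sets about their centroids.
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    # Cross-covariance matrix and its SVD.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)

    # Correct for a possible reflection so that det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    # Translation maps the src centroid onto the dst centroid.
    t = dst_mean - R @ src_mean
    return R, t
```

With noise-free correspondences this recovers the ground-truth extrinsics exactly; with noisy data it gives the least-squares optimal rigid alignment, which is why it serves as the inner step of iterative registration schemes such as ICP.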