The combination of LiDARs and cameras enables a mobile robot to perceive its environment with multi-modal data, a key factor in achieving robust perception. Traditional frame cameras are sensitive to changing illumination, which motivates us to introduce novel event cameras to make LiDAR-camera fusion more complete and robust. However, to exploit these sensors jointly, the challenging problem of extrinsic calibration must be addressed. This paper proposes an automatic checkerboard-based approach to calibrate the extrinsics between a LiDAR and a frame/event camera, making four contributions. First, we present a method that automatically extracts features from and tracks the checkerboard in LiDAR point clouds. Second, we reconstruct realistic frame images from event streams, which allows traditional corner detectors to be applied to event cameras. Third, we propose an initialization-refinement procedure that estimates the extrinsics in a coarse-to-fine manner using point-to-plane and point-to-line constraints. Fourth, we introduce a unified and globally optimal solution to the two optimization problems arising in calibration. Our approach has been validated with extensive experiments on 19 simulated and real-world datasets and outperforms the state-of-the-art.
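To make the geometric constraints concrete, the sketch below shows how point-to-plane and point-to-line residuals are typically formed in checkerboard-based LiDAR-camera calibration: LiDAR points are mapped into the camera frame by a candidate extrinsic (R, t) and measured against the checkerboard plane and its border lines. This is a minimal illustration under assumed conventions (unit plane normal, unit line direction, all geometry expressed in the camera frame), not the authors' implementation; every name in it is illustrative.

```python
# Minimal sketch of point-to-plane and point-to-line calibration residuals.
# Assumptions (not from the paper): the plane is n^T x + d = 0 with unit
# normal n, the line is (p0, unit direction dir), and (R, t) maps LiDAR
# points into the camera frame.
import numpy as np

def point_to_plane_residuals(R, t, lidar_points, plane_normal, plane_d):
    """Signed distances of transformed LiDAR points to the checkerboard plane."""
    pts_cam = lidar_points @ R.T + t          # (N, 3) points in the camera frame
    return pts_cam @ plane_normal + plane_d   # (N,) signed point-to-plane distances

def point_to_line_residuals(R, t, edge_points, line_point, line_dir):
    """Distances of transformed LiDAR edge points to a checkerboard border line."""
    pts_cam = edge_points @ R.T + t
    diff = pts_cam - line_point
    # Subtract the component along the line; the remainder is perpendicular.
    perp = diff - np.outer(diff @ line_dir, line_dir)
    return np.linalg.norm(perp, axis=1)       # (N,) point-to-line distances

# Toy usage: evaluate residuals for an identity extrinsic guess.
R0, t0 = np.eye(3), np.zeros(3)
board_pts = np.random.rand(100, 3)            # synthetic LiDAR points on the board
r_plane = point_to_plane_residuals(
    R0, t0, board_pts, np.array([0.0, 0.0, 1.0]), -1.0
)
```

In a coarse-to-fine scheme of the kind the abstract describes, residuals like these would first be minimized from a rough initialization and then refined jointly; the stacked residual vector is what a nonlinear least-squares or globally optimal solver would drive toward zero.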