This paper presents a novel 3D mapping robot with an omnidirectional field-of-view (FoV) sensor suite composed of a non-repetitive LiDAR and an omnidirectional camera. Thanks to the non-repetitive scanning nature of the LiDAR, an automatic targetless co-calibration method is proposed to simultaneously calibrate the intrinsic parameters of the omnidirectional camera and the extrinsic parameters between the camera and the LiDAR, a crucial step for bringing color and texture information to the point clouds in surveying and mapping tasks. Comparisons and analyses are made against target-based intrinsic calibration and mutual-information (MI)-based extrinsic calibration, respectively. With this co-calibrated sensor suite, the hybrid mapping robot integrates both an odometry-based mapping mode and a stationary mapping mode. Meanwhile, we propose a new coarse-to-fine mapping workflow: efficient, coarse mapping of the global environment in odometry-based mapping mode; viewpoint planning in the region of interest (ROI) based on the coarse map (building on our previous work); and navigating to each viewpoint to perform finer, more precise stationary scanning and mapping of the ROI. The fine map is then stitched with the global coarse map, yielding a result that is more efficient than conventional stationary approaches and more precise than emerging odometry-based approaches, respectively.