Large-scale colored point clouds have many advantages in navigation and scene display. Such colored point clouds can be obtained by combining cameras and LiDARs, both of which are now widely used in reconstruction tasks. However, many existing frameworks do not fuse the information from these two kinds of sensors well, yielding inaccurate camera poses and, consequently, degraded point colorization results. We propose a novel framework called Camera Pose Augmentation (CP+) to improve the camera poses and align them directly with the LiDAR-based point cloud. Initial coarse camera poses are given by LiDAR-Inertial or LiDAR-Inertial-Visual Odometry with approximate extrinsic parameters and time synchronization. The key steps to improve the alignment of the images are: selecting the point cloud corresponding to a region of interest in each camera view, extracting reliable edge features from this point cloud, and deriving 2D-3D line correspondences that drive an iterative minimization of the re-projection error.
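To make the final step concrete, the following is a minimal sketch of refining a camera pose by minimizing the re-projection error over 2D-3D line correspondences. It is not the paper's exact formulation: the residual (signed point-to-line distance of projected 3D edge points against matched image lines), the axis-angle pose parameterization, the robust loss, and all names (`K`, `pts_3d`, `lines_2d`, `refine_pose`) are illustrative assumptions.

```python
# Sketch: least-squares pose refinement from 2D-3D line correspondences.
# Assumed setup: pts_3d are points sampled on LiDAR edge features, and
# lines_2d[i] is the matched image line for pts_3d[i], given in normalized
# homogeneous form (a, b, c) with a^2 + b^2 = 1.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(pose, K, pts_3d, lines_2d):
    """Signed point-to-line distances of projected 3D edge points.

    pose     : 6-vector [rx, ry, rz, tx, ty, tz] (axis-angle + translation)
    K        : 3x3 camera intrinsics
    pts_3d   : (N, 3) points sampled on 3D edge features
    lines_2d : (N, 3) matched image lines (a, b, c), a^2 + b^2 = 1
    """
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    t = pose[3:]
    cam = pts_3d @ R.T + t            # transform into the camera frame
    proj = cam @ K.T                  # apply intrinsics
    uv1 = proj / proj[:, 2:3]         # homogeneous pixel coordinates (u, v, 1)
    # a*u + b*v + c is the distance to the line when (a, b) is unit-length
    return np.sum(uv1 * lines_2d, axis=1)

def refine_pose(pose0, K, pts_3d, lines_2d):
    # A robust (Huber) loss downweights outlier correspondences.
    sol = least_squares(residuals, pose0, args=(K, pts_3d, lines_2d),
                        loss="huber", f_scale=2.0)
    return sol.x
</code>
```

In an iterative scheme like the one the abstract describes, the 2D-3D line correspondences would be re-derived after each pose update, and this least-squares refinement repeated until the re-projection error converges.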