We propose a 6D RGB-D odometry approach that estimates the relative camera pose between consecutive RGB-D frames via keypoint extraction and feature matching on both the RGB and depth image planes. We then feed the estimated pose to the highly accurate KinectFusion algorithm, which uses a fast ICP (Iterative Closest Point) to fine-tune the frame-to-frame relative pose and fuse the depth data into a global implicit surface. We evaluate our method on the publicly available RGB-D SLAM benchmark dataset of Sturm et al. The experimental results show that our proposed reconstruction method, based solely on visual odometry and KinectFusion, outperforms the state-of-the-art RGB-D SLAM system in accuracy. Moreover, our algorithm outputs a ready-to-use polygon mesh (highly suitable for creating 3D virtual worlds) without any postprocessing steps.
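As a rough illustration of the frame-to-frame odometry step, the following sketch (in Python with OpenCV) estimates a relative pose from a single RGB-D frame pair via ORB keypoint matching and RANSAC PnP. It is a simplified assumption-laden sketch, not the paper's method: the intrinsics and depth scale are placeholder values (here, the TUM RGB-D convention), and it matches keypoints on the RGB plane only, whereas the described approach also extracts features on the depth image plane.

```python
import cv2
import numpy as np

# Assumed pinhole intrinsics (Kinect-like defaults); replace with the
# calibrated values of the actual sensor.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5
DEPTH_SCALE = 5000.0  # raw depth units per meter (TUM RGB-D convention)

def relative_pose(rgb1, depth1, rgb2):
    """Estimate the pose of frame 2 relative to frame 1 from one RGB-D
    frame pair via keypoint matching and PnP (a simplified sketch of the
    frame-to-frame visual odometry described in the abstract)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(rgb1, None)
    kp2, des2 = orb.detectAndCompute(rgb2, None)

    # Brute-force Hamming matching with a Lowe ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < 0.75 * n.distance]

    pts3d, pts2d = [], []
    for m in matches:
        u, v = kp1[m.queryIdx].pt
        z = depth1[int(v), int(u)] / DEPTH_SCALE
        if z <= 0:  # skip invalid depth readings
            continue
        # Back-project the frame-1 keypoint into 3D camera coordinates.
        pts3d.append([(u - CX) * z / FX, (v - CY) * z / FY, z])
        pts2d.append(kp2[m.trainIdx].pt)

    # Robust 3D-2D pose estimation; outlier matches are rejected by RANSAC.
    K = np.array([[FX, 0, CX], [0, FY, CY], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d, np.float64), np.asarray(pts2d, np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec              # transform mapping frame-1 points into frame 2
```

In the full system, the resulting [R|t] would serve as the initial guess handed to KinectFusion's fast ICP, which refines the pose against the global implicit surface before the depth map is fused.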