In recent years, visual SLAM has made great progress, but in complex scenes, especially rotating scenes, mapping error increases significantly and the SLAM system easily loses track. In this article, we propose InterpolationSLAM, a visual SLAM framework based on ORB-SLAM2. InterpolationSLAM is robust in rotating scenes for both Monocular and RGB-D configurations. By detecting the rotation and performing interpolation at the rotated position, the pose of the system can be estimated more accurately there, thereby improving the accuracy and robustness of the SLAM system in rotating scenes. To the best of our knowledge, this is the first work to combine an interpolation network with a visual SLAM system to improve robustness in rotating scenes. We conduct experiments on both the KITTI Monocular and TUM RGB-D datasets. The results demonstrate that InterpolationSLAM outperforms standard visual SLAM baselines in accuracy.
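To make the rotation-detection-and-interpolation idea concrete, the following is a minimal sketch, not the paper's actual implementation: the rotation angle between consecutive frame poses is measured, and when it exceeds an assumed threshold, an intermediate pose is synthesized by spherical linear interpolation (slerp) of the rotations and linear interpolation of the translations. The threshold value and function names are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Assumed threshold (degrees) above which a frame pair counts as "rotating";
# the paper's detection criterion may differ.
ROT_THRESHOLD_DEG = 10.0

def relative_rotation_deg(R_prev, R_curr):
    """Angle of the relative rotation between two frame poses, in degrees."""
    R_rel = Rotation.from_matrix(R_prev).inv() * Rotation.from_matrix(R_curr)
    return np.degrees(R_rel.magnitude())

def interpolate_pose(R_prev, t_prev, R_curr, t_curr, alpha=0.5):
    """Intermediate pose: slerp for rotation, lerp for translation."""
    key_rots = Rotation.from_matrix(np.stack([R_prev, R_curr]))
    slerp = Slerp([0.0, 1.0], key_rots)
    R_mid = slerp(alpha).as_matrix()
    t_mid = (1.0 - alpha) * t_prev + alpha * t_curr
    return R_mid, t_mid

# Usage: a 30-degree rotation about the y-axis triggers interpolation.
R0 = np.eye(3)
R1 = Rotation.from_euler("y", 30, degrees=True).as_matrix()
t0, t1 = np.zeros(3), np.array([1.0, 0.0, 0.0])
if relative_rotation_deg(R0, R1) > ROT_THRESHOLD_DEG:
    R_mid, t_mid = interpolate_pose(R0, t0, R1, t1)
```

In the paper itself the interpolation is performed by a learned network on the images rather than by a geometric slerp of poses; this sketch only illustrates the detect-then-interpolate control flow.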