While dense visual SLAM methods are capable of estimating dense reconstructions of the environment, they suffer from a lack of robustness in their tracking step, especially when the optimisation is poorly initialised. Sparse visual SLAM systems have attained high levels of accuracy and robustness through the inclusion of inertial measurements in a tightly-coupled fusion. Inspired by this performance, we propose the first tightly-coupled dense RGB-D-inertial SLAM system. Our system runs in real time on a GPU. It jointly optimises for the camera pose, velocity, IMU biases and gravity direction while building up a globally consistent, fully dense surfel-based 3D reconstruction of the environment. Through a series of experiments on both synthetic and real-world datasets, we show that our dense visual-inertial SLAM system is more robust to fast motions and periods of low texture and low geometric variation than a related RGB-D-only SLAM system.
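To make the scope of the joint estimation concrete, the sketch below collects the quantities named above (camera pose, velocity, IMU biases and gravity direction) into a single state container. This is a minimal, hypothetical C++ illustration using Eigen; the type and member names are assumptions for exposition and are not taken from the system described here.

```cpp
// Minimal sketch (not the system's actual code): the state that a
// tightly-coupled dense RGB-D-inertial tracker would jointly optimise.
#include <Eigen/Dense>
#include <iostream>

// Hypothetical state container; names are illustrative assumptions.
struct VisualInertialState {
  Eigen::Matrix4f T_WC = Eigen::Matrix4f::Identity();       // camera pose (world from camera)
  Eigen::Vector3f v_W  = Eigen::Vector3f::Zero();            // linear velocity in the world frame
  Eigen::Vector3f b_g  = Eigen::Vector3f::Zero();            // gyroscope bias
  Eigen::Vector3f b_a  = Eigen::Vector3f::Zero();            // accelerometer bias
  Eigen::Vector3f g_W  = Eigen::Vector3f(0.f, 0.f, -9.81f);  // gravity direction estimate
};

int main() {
  VisualInertialState x;
  // A joint cost would couple a dense RGB-D alignment residual with an IMU
  // preintegration residual over x; only the state layout is sketched here.
  std::cout << "initial gravity estimate: " << x.g_W.transpose() << std::endl;
  return 0;
}
```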