Classical Visual Simultaneous Localization and Mapping (VSLAM) algorithms can easily fail when the robot's motion or the environment is too challenging. Recently, the use of Deep Neural Networks to enhance VSLAM algorithms, an approach we refer to as hybrid methods, has achieved promising results. In this paper, we compare the performance of hybrid monocular VSLAM methods that use different learned feature descriptors. To this end, we propose a set of experiments to evaluate the robustness of the algorithms under different environments, camera motions, and camera sensor noise. Experiments conducted on the KITTI and EuRoC MAV datasets confirm that learned feature descriptors can create more robust VSLAM systems.