As a fundamental task for intelligent robots, visual SLAM has made great progress over the past decades. However, robust SLAM in highly weak-textured environments remains challenging. In this paper, we propose a novel visual SLAM system named RWT-SLAM to tackle this problem. We modify the LoFTR network, which is able to produce dense point matching under low-textured scenes, to generate feature descriptors. To integrate the new features into the popular ORB-SLAM framework, we develop feature masks to filter out unreliable features and employ a KNN strategy to strengthen matching robustness. We also retrain the visual vocabulary on the new descriptors for efficient loop closing. The resulting RWT-SLAM is tested on various public datasets such as TUM and OpenLORIS, as well as on our own data. The results show very promising performance in highly weak-textured environments.
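For intuition, the KNN matching strategy mentioned above is typically realized as a nearest-neighbour search over descriptors combined with Lowe's ratio test, which rejects ambiguous matches. The sketch below is a minimal illustration of that general technique, not the paper's actual implementation; the function name `knn_ratio_match`, the `ratio=0.8` threshold, and plain L2 distance are assumptions made for the example.

```python
import numpy as np

def knn_ratio_match(desc_a: np.ndarray, desc_b: np.ndarray,
                    k: int = 2, ratio: float = 0.8):
    """Match each descriptor in desc_a to its k nearest neighbours in
    desc_b (L2 distance), keeping only matches that pass Lowe's ratio
    test: the best neighbour must be clearly closer than the runner-up.
    Assumes desc_b has at least k rows. Illustrative sketch only."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        nn = np.argsort(dists)[:k]                  # indices of the k nearest
        if dists[nn[0]] < ratio * dists[nn[1]]:     # ratio test filters ambiguous pairs
            matches.append((i, int(nn[0])))
    return matches

# Toy usage: random 128-D descriptors standing in for two frames.
a = np.random.rand(500, 128).astype(np.float32)
b = np.random.rand(600, 128).astype(np.float32)
print(len(knn_ratio_match(a, b)), "matches survive the ratio test")
```

The ratio test is what lends KNN matching its robustness in low-texture regions: when several candidates look equally similar, the best and second-best distances are close, and the match is discarded rather than trusted.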