In this paper, we present an active visual SLAM approach for omnidirectional robots. The goal is to generate control commands that allow such a robot to simultaneously localize itself and map an unknown environment while maximizing the information gained and consuming as little energy as possible. Leveraging the robot's independent translation and rotation control, we introduce a multi-layered approach for active V-SLAM. The top layer decides on informative goal locations and generates highly informative paths to them. The second and third layers actively re-plan and execute the path, exploiting the continuously updated map and local feature information. Moreover, we introduce two utility formulations that account for obstacles in the field of view and for the robot's location. Through rigorous simulations, real robot experiments, and comparisons with state-of-the-art methods, we demonstrate that our approach achieves similar coverage with lower overall map entropy. This is obtained while keeping the traversed distance up to 39% shorter than the other methods and without increasing the wheels' total rotation amount. Code and implementation details are provided as open-source.