The integration of multiple sensor modalities and deep learning into Simultaneous Localization And Mapping (SLAM) systems is an area of significant interest in current research. Multi-modality is a stepping stone towards robustness in challenging environments and interoperability of heterogeneous multi-robot systems with varying sensor setups. With maplab 2.0, we provide a versatile open-source platform that facilitates developing, testing, and integrating new modules and features into a fully fledged SLAM system. Through extensive experiments, we show that the accuracy of maplab 2.0 is comparable to the state of the art on the HILTI 2021 benchmark. Additionally, we showcase the flexibility of our system with three use cases: i) large-scale (approx. 10 km) multi-robot, multi-session (23 missions) mapping, ii) integration of non-visual landmarks, and iii) incorporation of a semantic object-based loop-closure module into the mapping framework. The code is available open source at https://github.com/ethz-asl/maplab.