With the growing demand for deploying teams of robots to perform tasks collaboratively, the research community has become increasingly interested in collaborative simultaneous localization and mapping (SLAM). Unfortunately, existing datasets are limited in the scale and variation of the collaborative trajectories they capture, even though generalization across the trajectories of different agents is crucial to the overall viability of collaborative tasks. To help align the research community's contributions with real-world multi-agent coordinated SLAM problems, we introduce S3E, a novel large-scale multimodal dataset captured by a fleet of unmanned ground vehicles along four designed collaborative trajectory paradigms. S3E consists of 7 outdoor and 5 indoor scenes, each exceeding 200 seconds, with well-synchronized and calibrated high-quality stereo camera, LiDAR, and high-frequency IMU data. Crucially, our effort exceeds previous attempts in dataset size, scene variability, and complexity, offering 4x the average recording time of the pioneering EuRoC dataset. We also provide a careful dataset analysis as well as baselines for collaborative SLAM and its single-robot counterpart. Find data, code, and more up-to-date information at https://github.com/PengYu-Team/S3E.