We propose Super Odometry, a high-precision multi-modal sensor-fusion framework that provides a simple but effective way to fuse multiple sensors, such as LiDAR, camera, and IMU, and achieves robust state estimation in perceptually degraded environments. Unlike traditional sensor-fusion methods, Super Odometry employs an IMU-centric data-processing pipeline that combines the advantages of loosely coupled and tightly coupled methods and recovers motion in a coarse-to-fine manner. The proposed framework is composed of three parts: IMU odometry, visual-inertial odometry, and laser-inertial odometry. The visual-inertial and laser-inertial odometry modules provide pose priors to constrain the IMU bias and in turn receive motion predictions from the IMU odometry. To ensure real-time performance, we apply a dynamic octree that consumes only 10% of the running time of a static KD-tree. The proposed system was deployed on drones and ground robots as part of Team Explorer's effort in the DARPA Subterranean Challenge, where the team won $1^{st}$ and $2^{nd}$ place in the Tunnel and Urban Circuits, respectively.
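The IMU-centric, coarse-to-fine interaction described above can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: all class and function names are hypothetical, the state is 1-D rather than SE(3), and the simple proportional corrections stand in for the factor-graph optimization a real system would use. It only shows the data flow: IMU odometry produces a coarse motion prediction, each observation-based odometry refines it, and the residual feeds back as a constraint on the IMU bias.

```python
from dataclasses import dataclass

@dataclass
class State:
    pose: float = 0.0      # 1-D stand-in for a full SE(3) pose
    velocity: float = 0.0
    imu_bias: float = 0.0  # accelerometer bias estimate

class IMUOdometry:
    """Coarse layer: dead-reckon by integrating bias-corrected IMU readings."""
    def predict(self, state: State, accel: float, dt: float) -> State:
        v = state.velocity + (accel - state.imu_bias) * dt
        return State(state.pose + v * dt, v, state.imu_bias)

class ObservationOdometry:
    """Fine layer: stands in for visual-inertial or laser-inertial odometry.
    It pulls the IMU prediction toward its own pose estimate and feeds the
    residual back as a (crude) pose prior that constrains the IMU bias."""
    def __init__(self, gain: float):
        self.gain = gain  # how strongly this modality corrects the prediction

    def refine(self, predicted: State, observed_pose: float) -> State:
        corrected = predicted.pose + self.gain * (observed_pose - predicted.pose)
        bias = predicted.imu_bias + 0.1 * (predicted.pose - corrected)
        return State(corrected, predicted.velocity, bias)

# Coarse-to-fine loop: IMU predicts, then each modality refines in turn.
state = State()
imu = IMUOdometry()
vio = ObservationOdometry(gain=0.5)   # hypothetical visual-inertial module
lio = ObservationOdometry(gain=0.8)   # hypothetical laser-inertial module

for accel, vis_pose, laser_pose in [(1.0, 0.04, 0.05), (1.0, 0.19, 0.20)]:
    state = imu.predict(state, accel, dt=0.1)   # coarse motion prediction
    state = vio.refine(state, vis_pose)         # refine with camera
    state = lio.refine(state, laser_pose)       # refine with LiDAR
```

A design point the sketch preserves: the IMU layer never ingests raw camera or LiDAR data, so a degraded modality can only weaken its own correction step rather than corrupt the core prediction.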