We present VILENS (Visual Inertial Lidar Legged Navigation System), an odometry system for legged robots based on factor graphs. The key novelty is the tight fusion of four different sensor modalities to achieve reliable operation when the individual sensors would otherwise produce degenerate estimation. To minimize leg odometry drift, we extend the robot's state with a linear velocity bias term that is estimated online. This bias is observable because of the tight fusion of the preintegrated velocity factor with vision, lidar, and IMU factors. Extensive experimental validation on different ANYmal quadruped robots is presented, for a total duration of 2 h and 1.8 km traveled. The experiments involved dynamic locomotion over loose rocks, slopes, and mud, which caused challenges such as slippage and terrain deformation. Perceptual challenges included dark and dusty underground caverns as well as open, feature-deprived areas. We show an average improvement of 62% in translational and 51% in rotational error compared to a state-of-the-art loosely coupled approach. To demonstrate its robustness, VILENS was also integrated with a perceptive controller and a local path planner.
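To make the velocity-bias idea concrete, here is a minimal illustrative sketch (not the VILENS implementation, and the helper name is hypothetical): leg odometry reports a linear velocity corrupted by a slowly varying bias, e.g. from foot slippage, so integrating the raw measurements accumulates drift, while subtracting an online bias estimate removes most of it.

```python
# Illustrative 1-D sketch: leg-odometry velocity with an additive bias.
# In VILENS the bias is a full state variable estimated in the factor
# graph; here we simply show why correcting it matters.

def integrate_position(velocities, dt, bias=0.0):
    """Preintegrate velocity measurements into a displacement,
    subtracting an estimated velocity bias (hypothetical helper)."""
    return sum((v - bias) * dt for v in velocities)

# Ground truth: robot walks at 0.5 m/s for 10 s (100 steps of 0.1 s).
dt, true_v, slip_bias = 0.1, 0.5, 0.08   # 0.08 m/s bias from slippage
measured = [true_v + slip_bias] * 100    # leg odometry over-reports speed

raw = integrate_position(measured, dt)                        # ~5.8 m, 0.8 m drift
corrected = integrate_position(measured, dt, bias=slip_bias)  # ~5.0 m, drift removed
```

In the real system the bias is not known a priori; it becomes observable only because the preintegrated velocity factor is tightly fused with vision, lidar, and IMU factors in the same graph.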