This paper introduces a novel proprioceptive state estimator for legged robots based on a learned displacement measurement from IMU data. Recent research in pedestrian tracking has shown that motion can be inferred from inertial data using convolutional neural networks. A learned inertial displacement measurement can improve state estimation in challenging scenarios where leg odometry is unreliable, such as slipping and compressible terrain. Our work learns to estimate a displacement measurement from IMU data, which is then fused with traditional leg odometry. Our approach greatly reduces the drift of proprioceptive state estimation, which is critical for legged robots deployed in vision- and lidar-denied environments such as foggy sewers or dusty mines. We compared results from an EKF and an incremental fixed-lag factor-graph estimator using data from several real robot experiments crossing challenging terrain. Our results show a 37% reduction in relative pose error in challenging scenarios compared to a traditional kinematic-inertial estimator without the learned measurement. We also demonstrate a 22% reduction in error when used alongside vision systems in visually degraded environments such as an underground mine.