This work develops a learning-based contact estimator for legged robots that takes multi-modal proprioceptive sensory data as input, bypassing the need for dedicated physical contact sensors. Unlike vision-based state estimators, proprioceptive state estimators are unaffected by perceptually degraded conditions such as darkness or fog. While some robots carry dedicated physical sensors that detect the contact events needed for state estimation, many do not, and adding such sensors is non-trivial without redesigning the hardware. The trained network can estimate contact events on different terrains. Experiments show that a contact-aided invariant extended Kalman filter produces odometry trajectories whose accuracy is comparable to a state-of-the-art visual SLAM system, enabling robust proprioceptive odometry.
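To make the idea concrete, here is a minimal sketch of the kind of learned contact classifier the abstract describes: a small network that maps a vector of proprioceptive features (e.g. joint encoders and IMU readings) to a per-leg contact probability. The feature dimensions, architecture, and randomly initialized weights are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration only.
N_FEATURES = 30   # e.g. joint angles, joint velocities, IMU readings
N_HIDDEN = 64
N_LEGS = 4        # quadruped: one binary contact state per leg

# Random weights stand in for a trained network's parameters.
W1 = rng.normal(0.0, 0.1, (N_HIDDEN, N_FEATURES))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_LEGS, N_HIDDEN))
b2 = np.zeros(N_LEGS)

def contact_probabilities(x):
    """Forward pass: proprioceptive feature vector -> per-leg contact probability."""
    h = np.maximum(0.0, W1 @ x + b1)         # ReLU hidden layer
    logits = W2 @ h + b2
    return 1.0 / (1.0 + np.exp(-logits))     # sigmoid, one probability per leg

x = rng.normal(size=N_FEATURES)              # one timestep of sensor features
p = contact_probabilities(x)
in_contact = p > 0.5                         # binarized contact events
```

In a full pipeline, the binarized contact events would gate which legs contribute kinematic contact measurements to the invariant extended Kalman filter's update step.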