This work reports on developing a deep learning-based contact estimator for legged robots that bypasses the need for physical contact sensors, taking multi-modal proprioceptive sensory data from joint encoders, kinematics, and an inertial measurement unit as input. Unlike vision-based state estimators, proprioceptive state estimators are unaffected by perceptually degraded conditions such as darkness or fog. For legged robots, reliable kinematics and contact data are necessary for developing a proprioceptive state estimator. While some robots are equipped with dedicated contact sensors or springs to detect contact, others lack them, and retrofitting such sensors is non-trivial without redesigning the hardware. The trained deep network accurately estimates contacts across different terrains and gaits and is deployed alongside a contact-aided invariant extended Kalman filter to generate odometry trajectories. The filter performs comparably to a state-of-the-art visual SLAM system.
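To make the pipeline concrete, below is a minimal sketch of what such a contact estimator could look like: a 1D convolutional network over a sliding window of stacked proprioceptive channels (joint encoder readings, foot kinematics, IMU angular velocity and linear acceleration) that outputs one contact probability per leg. The channel count, window length, layer sizes, and per-leg binary formulation are illustrative assumptions, not the exact architecture reported in the work.

    # Illustrative sketch only; layer sizes and channel counts are assumed.
    import torch
    import torch.nn as nn

    class ContactEstimator(nn.Module):
        def __init__(self, in_channels: int = 54, num_legs: int = 4):
            super().__init__()
            # Temporal convolutions over the proprioceptive window.
            self.features = nn.Sequential(
                nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # collapse the time axis
            )
            # One logit per leg; sigmoid yields a contact probability.
            self.head = nn.Linear(64, num_legs)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, window_length)
            z = self.features(x).squeeze(-1)      # (batch, 64)
            return torch.sigmoid(self.head(z))    # per-leg contact probability

    # Example: a window of 150 samples of 54 stacked proprioceptive channels.
    est = ContactEstimator()
    window = torch.randn(1, 54, 150)
    print(est(window))  # shape (1, 4): one contact probability per leg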
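Downstream, the estimated contacts determine which legs contribute kinematic measurement updates to the filter. The sketch below shows only this gating data flow under a hypothetical filter interface; the contact-aided invariant extended Kalman filter itself operates on a matrix Lie group and is not reproduced here. All names (ContactAidedFilter, fuse_step, CONTACT_THRESHOLD) are assumptions for illustration.

    # Schematic data flow only; the filter internals are stubbed out.
    CONTACT_THRESHOLD = 0.5  # assumed decision threshold on the network output

    class ContactAidedFilter:
        """Stub standing in for the contact-aided filter (hypothetical API)."""

        def propagate(self, imu_sample):
            pass  # integrate IMU angular velocity and linear acceleration

        def correct_kinematics(self, leg, joint_sample):
            pass  # stance-foot forward-kinematics measurement update

    def fuse_step(filt, imu_sample, joint_sample, contact_probs):
        # The IMU drives the motion model on every step, contact or not.
        filt.propagate(imu_sample)
        # Only legs the network believes are in contact contribute a
        # stance-foot kinematic correction.
        for leg, p in enumerate(contact_probs):
            if p > CONTACT_THRESHOLD:
                filt.correct_kinematics(leg, joint_sample)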