This work developed a learning framework for perceptive legged locomotion that combines visual feedback, proprioceptive information, and active gait regulation of foot-ground contacts. Perception requires only a single forward-facing camera to obtain the heightmap, and active regulation of gait pace and traveling velocity is realized through our formulation of CPG-based high-level imitation of foot-ground contacts. Through this framework, an end-user can issue task-level commands to set different walking speeds and gait frequencies according to the terrain being traversed, enabling more reliable negotiation of encountered obstacles. The results demonstrated that the learned perceptive locomotion policy followed task-level control inputs with the intended behaviors and remained robust in the presence of unseen terrains and external force perturbations. A video demonstration can be found at https://youtu.be/OTzlWzDfAe8, and the codebase at https://github.com/jennyzzt/perceptual-locomotion.
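To make the CPG-based gait regulation concrete, the following is a minimal illustrative sketch (not the paper's implementation; all names are hypothetical) of how a phase-oscillator CPG can turn a commanded gait frequency into per-foot contact targets that a policy could then be trained to imitate:

```python
# Hypothetical sketch of CPG-style contact scheduling, assuming one phase
# oscillator per leg and a fixed stance duty factor. Not the authors' code.

def cpg_contact_schedule(t, gait_freq_hz, phase_offsets, duty_factor=0.5):
    """Return a desired foot-ground contact flag for each leg at time t.

    Each leg carries a phase offset defining the gait pattern (trot, walk,
    ...); a leg is in stance while its normalized cyclic phase is below the
    duty factor. Raising gait_freq_hz speeds up the stepping pace.
    """
    contacts = []
    for offset in phase_offsets:
        phase = (gait_freq_hz * t + offset) % 1.0  # normalized phase in [0, 1)
        contacts.append(phase < duty_factor)       # True = desired stance
    return contacts

# Trot gait: diagonal leg pairs share phase (order: FL, FR, RL, RR).
trot_offsets = [0.0, 0.5, 0.5, 0.0]
print(cpg_contact_schedule(t=0.1, gait_freq_hz=2.0,
                           phase_offsets=trot_offsets))
# → [True, False, False, True]
```

In this sketch, the task-level inputs (gait frequency, and by extension traveling velocity) only reshape the high-level contact schedule; the learned low-level policy would be responsible for realizing those contacts on the actual terrain.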