This work developed a learning framework for perceptive legged locomotion that combines visual feedback, proprioceptive information, and active gait regulation of foot-ground contacts. Perception requires only a single forward-facing camera to obtain the heightmap, and active regulation of gait pace and traveling velocity is realized through our formulation of CPG-based high-level imitation of foot-ground contacts. Through this framework, an end-user can command task-level inputs to control walking speed and gait frequency according to the terrain being traversed, enabling more reliable negotiation of encountered obstacles. The results demonstrated that the learned perceptive locomotion policy followed task-level control inputs with the intended behaviors and was robust in the presence of unseen terrains and external force perturbations. A video of the project can be found at https://youtu.be/OTzlWzDfAe8.
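As a rough illustration of the kind of CPG-based contact schedule referenced above, the sketch below uses one phase oscillator per leg: a commanded gait frequency sets how fast each phase advances, fixed phase offsets encode the gait pattern, and a duty factor decides which portion of the cycle is stance (foot-ground contact). This is a minimal assumption-laden sketch, not the paper's implementation; the function name, leg ordering, and trot offsets are all illustrative.

```python
import numpy as np

# Minimal CPG-style contact scheduler (illustrative, not the paper's code).
# Each leg's phase advances at the commanded gait frequency; a leg is in
# stance (foot-ground contact) while its phase lies below the duty factor.
# The default phase offsets encode a trot: diagonal legs move in phase.

def cpg_contact_schedule(t, freq_hz=2.0, duty=0.5,
                         offsets=(0.0, 0.5, 0.5, 0.0)):
    """Return a boolean contact flag per leg (FL, FR, RL, RR) at time t."""
    phases = (freq_hz * t + np.asarray(offsets)) % 1.0
    return phases < duty  # True = stance, False = swing

# Sampling over half a gait cycle shows the alternating diagonal
# support pattern that a high-level policy could be trained to imitate.
for t in np.linspace(0.0, 0.25, 3):
    print(round(t, 3), cpg_contact_schedule(t))
```

Raising `freq_hz` shortens the gait cycle (faster stepping), while `duty` trades stance time against swing time, which is one simple way a task-level input could modulate gait pace.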