Reinforcement Learning (RL) has made great strides in quadruped locomotion, with continued progress in the reliable sim-to-real transfer of policies. However, reusing a trained policy on another robot, which would save the time needed for retraining, remains a challenge. In this work, we present a framework for zero-shot policy retargeting wherein diverse motor skills can be transferred between robots of different shapes and sizes. The framework centers on a planning-and-control pipeline that systematically integrates RL and Model Predictive Control (MPC). The planning stage employs RL to generate a dynamically plausible trajectory along with a contact schedule, avoiding the combinatorial complexity of contact sequence optimization. This information then seeds the MPC, which stabilizes and robustifies the policy roll-out via a new Hybrid Kinodynamic (HKD) model that implicitly optimizes foothold locations. Hardware results demonstrate the ability to transfer policies from both the A1 and Laikago robots to the MIT Mini Cheetah robot without any policy re-tuning.
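To make the pipeline structure concrete, below is a minimal Python sketch of how an RL planning stage might seed an MPC stage with a reference trajectory and contact schedule. All names here (`RLPlanner`, `HKDMPC`, `Plan`, the fixed trot pattern, and the 12-dimensional state) are illustrative assumptions, not the authors' implementation or API.

```python
# Minimal sketch of the planning-and-control pipeline described above.
# All class and function names are hypothetical stand-ins, not the paper's API.
from dataclasses import dataclass
import numpy as np

@dataclass
class Plan:
    body_traj: np.ndarray         # (H, 12) reference body states over horizon H
    contact_schedule: np.ndarray  # (H, 4) boolean stance flags per leg

class RLPlanner:
    """Stand-in for a trained RL policy that rolls out a dynamically
    plausible reference trajectory plus a contact schedule."""
    def __init__(self, horizon: int = 10):
        self.horizon = horizon

    def plan(self, state: np.ndarray) -> Plan:
        # Placeholder rollout: hold the current body state and use a fixed
        # trot-like contact pattern (diagonal leg pairs alternating).
        body_traj = np.tile(state, (self.horizon, 1))
        trot = np.array([[1, 0, 0, 1], [0, 1, 1, 0]], dtype=bool)
        contact_schedule = np.resize(trot, (self.horizon, 4))
        return Plan(body_traj, contact_schedule)

class HKDMPC:
    """Stand-in for the MPC stage: it receives the contact schedule from the
    planner, so no combinatorial contact search is needed on its side."""
    def solve(self, state: np.ndarray, plan: Plan) -> np.ndarray:
        # A real solver would minimize tracking error subject to the Hybrid
        # Kinodynamic model, optimizing footholds implicitly; here we simply
        # return a zero correction as a placeholder.
        return np.zeros(12)

def control_step(state, planner, mpc):
    plan = planner.plan(state)      # RL proposes trajectory + contacts
    return mpc.solve(state, plan)   # MPC stabilizes the roll-out

if __name__ == "__main__":
    state = np.zeros(12)            # body pose and rates (assumed layout)
    u = control_step(state, RLPlanner(), HKDMPC())
    print(u.shape)                  # (12,) per-leg command vector
```

The key design choice this sketch illustrates is the division of labor: the RL stage fixes the discrete contact sequence, leaving the MPC a purely continuous optimization over the given schedule.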