Robots operating in human environments need a variety of skills, such as slow and fast walking, turning, and side-stepping. However, building robot controllers that can exhibit such a large range of behaviors is challenging and remains unsolved. We present an approach that uses a model-based controller to imitate different animal gaits without requiring any real-world fine-tuning. Unlike previous works that learn one policy per motion, we present a unified controller capable of generating four different animal gaits on the A1 robot. Our framework includes a trajectory optimization procedure that improves the quality of real-world imitation. We demonstrate our results in simulation and on a real 12-DoF A1 quadruped robot, and show that our approach can mimic four animal motions and outperforms baselines learned per motion.