Robust robot locomotion has been achieved by deep reinforcement learning (DRL) only recently. However, efficient learning of parametrized bipedal walking usually requires carefully developed reference motions, which limits the learned performance to that of the references. In this paper, we propose an adaptive reward function for imitation learning from such references. The agent is encouraged to mimic the references when its performance is low, and to pursue higher performance once it reaches the limit of the references. We further demonstrate that carefully developed references can be replaced by low-quality references that are generated without laborious tuning and are infeasible to deploy by themselves, as long as they provide a priori knowledge that expedites the learning process.
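For concreteness, here is a minimal sketch of one way such an adaptive weighting could be realized. The sigmoid form, the performance estimates `perf` and `ref_perf`, and all names below are assumptions for illustration; the abstract does not give the paper's actual formulation.

```python
import numpy as np

def adaptive_imitation_reward(r_task, r_imitation, perf, ref_perf, sharpness=5.0):
    """Blend imitation and task rewards with a performance-dependent weight.

    Hypothetical sketch: `perf` is the agent's current performance estimate,
    `ref_perf` the performance attainable by the reference.
    """
    # Imitation weight w is near 1 while the agent performs far below the
    # reference, and decays toward 0 as it reaches (or exceeds) the
    # reference's limit, letting the task reward dominate from then on.
    w = 1.0 / (1.0 + np.exp(sharpness * (perf / ref_perf - 1.0)))
    return w * r_imitation + (1.0 - w) * r_task

# A weak walker is steered toward mimicking the reference...
print(adaptive_imitation_reward(r_task=0.2, r_imitation=0.9, perf=0.3, ref_perf=1.0))
# ...while an agent at the reference's limit is freed to optimize the task.
print(adaptive_imitation_reward(r_task=0.8, r_imitation=0.5, perf=1.0, ref_perf=1.0))
```

Under this sketch, even a low-quality reference is useful early in training (it dominates the reward while the agent is weak) and is automatically phased out once it has nothing left to teach.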