Mastering robotic manipulation skills through reinforcement learning (RL) typically requires the design of shaped reward functions. Recent developments in this area have demonstrated that using sparse rewards, i.e. rewarding the agent only when the task has been successfully completed, can lead to better policies. However, state-action space exploration is more difficult in this case. Recent RL approaches to learning with sparse rewards have leveraged high-quality human demonstrations for the task, but these can be costly, time-consuming, or even impossible to obtain. In this paper, we propose a novel and effective approach that does not require human demonstrations. We observe that every robotic manipulation task can be seen as involving a locomotion task from the perspective of the object being manipulated, i.e. the object could learn how to reach a target state on its own. To exploit this idea, we introduce a framework whereby an object locomotion policy is initially obtained using a realistic physics simulator. This policy is then used to generate auxiliary rewards, called simulated locomotion demonstration rewards (SLDRs), which enable us to learn the robot manipulation policy. The proposed approach has been evaluated on 13 tasks of increasing complexity, and achieves higher success rates and faster learning than alternative algorithms. SLDRs are especially beneficial for tasks such as multi-object stacking and non-rigid object manipulation.
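To make the auxiliary-reward idea concrete, the sketch below shows one plausible way a sparse task reward could be combined with an SLDR-style term that measures how closely the manipulated object tracks the state produced by a pretrained object-locomotion policy. All function names, the distance-based reward shape, and the weighting coefficient `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sparse_reward(object_state, goal, tol=0.05):
    """1.0 only when the object reaches the goal, else 0.0 (sparse reward)."""
    return 1.0 if np.linalg.norm(object_state - goal) < tol else 0.0

def sldr_reward(object_state, demo_state):
    """Hypothetical auxiliary reward: negative distance between the
    manipulated object's state and the state the pretrained
    object-locomotion policy would have reached."""
    return -np.linalg.norm(object_state - demo_state)

def total_reward(object_state, demo_state, goal, beta=0.1):
    # Sparse task reward plus a weighted locomotion-demonstration term
    # (beta is an assumed trade-off coefficient).
    return sparse_reward(object_state, goal) + beta * sldr_reward(object_state, demo_state)

# Example: object exactly at the goal and on the demonstrated trajectory,
# so the auxiliary term vanishes and only the sparse reward remains.
s = np.array([0.5, 0.2, 0.1])
print(total_reward(s, s, s))  # → 1.0
```

The auxiliary term provides a dense learning signal early in training, while the sparse term still defines task success, so the combined reward does not depend on hand-shaped task-specific features.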