We focus on the task of goal-oriented grasping, in which a robot must grasp a pre-assigned goal object in clutter and may need pre-grasp actions, such as pushes, to enable a stable grasp. In this task, however, the robot receives a positive reward from the environment only when it successfully grasps the goal object. Moreover, interleaving pushing and grasping lengthens the action sequence, compounding the problem of reward delay. Sample inefficiency therefore remains a main challenge in this task. In this paper, we propose a goal-conditioned hierarchical reinforcement learning formulation with high sample efficiency to learn a push-grasping policy for grasping a specific object in clutter. Our work improves sample efficiency in two ways. First, we use a goal-conditioned mechanism with goal relabeling to enrich the replay buffer. Second, we treat the pushing and grasping policies as a generator and a discriminator, respectively, and train the pushing policy under the supervision of the grasping discriminator, thus densifying pushing rewards. To address the distribution mismatch caused by the different training settings of the two policies, we add an alternating training stage that learns pushing and grasping in turn. Experiments in simulation and in the real world show that our method quickly learns effective pushing and grasping policies and outperforms existing methods in task completion rate and goal grasp success rate while requiring fewer motions. We further validate that our system also adapts to goal-agnostic settings with better performance. Notably, our system transfers to the real world without any fine-tuning. Our code is available at https://github.com/xukechun/Efficient_goal-oriented_push-grasping_synergy.
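To make the first mechanism concrete, the sketch below shows one way goal relabeling can enrich a replay buffer: when the robot grasps an object other than the assigned goal, the episode is stored a second time with the grasped object substituted as the goal, so an otherwise failed rollout yields a positively rewarded sample. This is a minimal sketch assuming transition tuples of the form (state, action, goal, reward, next_state); the class and argument names (`GoalRelabelingBuffer`, `grasped_object_id`) are illustrative placeholders, not the identifiers used in the released code.

```python
import random
from collections import deque

class GoalRelabelingBuffer:
    """Replay buffer enriched by goal relabeling: an episode that ends by
    grasping a non-goal object is stored again with that object substituted
    as the goal, so the final grasp counts as a success."""

    def __init__(self, capacity=100_000):
        self._buffer = deque(maxlen=capacity)

    def add_episode(self, episode, grasped_object_id):
        # episode: list of (state, action, goal_id, reward, next_state)
        for transition in episode:
            self._buffer.append(transition)
        # Relabel only when some object other than the goal was grasped.
        if grasped_object_id is not None and grasped_object_id != episode[-1][2]:
            last = len(episode) - 1
            for i, (s, a, _, r, s2) in enumerate(episode):
                # The relabeled final transition earns a positive reward;
                # earlier transitions keep their original (sparse) reward.
                new_reward = 1.0 if i == last else r
                self._buffer.append((s, a, grasped_object_id, new_reward, s2))

    def sample(self, batch_size):
        return random.sample(list(self._buffer), min(batch_size, len(self._buffer)))
```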
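The second mechanism can be sketched in the same spirit: the grasping policy acts as a discriminator that scores how graspable the goal object is, and a push receives a dense reward based on the improvement of that score. The function below is a hedged illustration of this idea; `grasp_q_net`, `goal_mask`, and the threshold `eps` are hypothetical placeholders rather than the exact network and values used in the paper.

```python
import torch

def push_reward(grasp_q_net, obs_before, obs_after, goal_mask, eps=0.1):
    """Dense pushing reward: score the scene with the grasping discriminator
    before and after a push, and reward the push if it raised the best grasp
    score for the goal object by more than `eps`."""
    with torch.no_grad():
        q_before = grasp_q_net(obs_before, goal_mask).max().item()
        q_after = grasp_q_net(obs_after, goal_mask).max().item()
    return 1.0 if q_after - q_before > eps else 0.0
```

Scoring pushes this way replaces the sparse grasp-success signal with feedback after every push, which is what densifies the pushing rewards described above.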