Robots often face situations where grasping a goal object is desirable but not feasible because other objects in the scene prevent the grasp. To address this problem, we present a deep Reinforcement Learning approach that learns grasping and pushing policies for manipulating a goal object in highly cluttered environments. In particular, we propose a dual Reinforcement Learning model approach, which is highly resilient in handling complicated scenes, reaching an average of 98% task completion with primitive objects in a simulation environment. To evaluate the performance of the proposed approach, we performed two extensive sets of experiments, in packed-object and object-pile scenarios, with a total of 1000 test runs in simulation. Experimental results show that the proposed method works well in both scenarios and outperforms recent state-of-the-art approaches. The demo video, trained models, and source code for reproducibility are publicly available at https://github.com/Kamalnl92/Self-Supervised-Learning-for-pushing-and-grasping.
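To make the dual-model idea concrete, the following is a minimal sketch, assuming one Q-network per primitive (push and grasp) producing pixel-wise value maps over the scene, with the action taken at the highest-valued pixel across both maps. The `QHead` network, `select_action` helper, and input shapes are illustrative assumptions and are not taken from the paper.

```python
import numpy as np
import torch
import torch.nn as nn

class QHead(nn.Module):
    """Tiny fully-convolutional head mapping a (C, H, W) scene observation
    to a pixel-wise Q-value map for one primitive (push or grasp)."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, x):
        return self.net(x)

def select_action(obs, push_q, grasp_q):
    """Evaluate both Q maps and return the primitive and pixel location
    with the highest predicted value. obs: tensor of shape (1, C, H, W)."""
    with torch.no_grad():
        q_push = push_q(obs).squeeze().numpy()
        q_grasp = grasp_q(obs).squeeze().numpy()
    if q_grasp.max() >= q_push.max():
        idx = np.unravel_index(q_grasp.argmax(), q_grasp.shape)
        return "grasp", idx
    idx = np.unravel_index(q_push.argmax(), q_push.shape)
    return "push", idx

if __name__ == "__main__":
    # Hypothetical observation: e.g. a color + depth heightmap stack.
    obs = torch.rand(1, 4, 64, 64)
    push_q, grasp_q = QHead(), QHead()
    print(select_action(obs, push_q, grasp_q))
```

In such a setup the push network is consulted only when no grasp is expected to succeed, so pushing serves to declutter the scene until the goal object becomes graspable; the exact architecture and training details are described in the paper and repository linked above.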