Recently, deep reinforcement learning (RL) has achieved impressive successes in robotic manipulation. However, training robots directly in the real world is nontrivial owing to sample-efficiency and safety concerns. Sim-to-real transfer addresses these concerns but introduces a new issue known as the reality gap. In this work, we introduce a sim-to-real learning framework for vision-based assembly tasks and perform training in a simulated environment using inputs from a single camera. To bridge the reality gap, we present a domain adaptation method based on cycle-consistent generative adversarial networks (CycleGAN) together with a force control transfer approach. We demonstrate that the proposed framework, trained entirely in simulation, can be successfully transferred to a real peg-in-hole setup.