Recent advances in unsupervised representation learning have significantly improved the sample efficiency of training reinforcement learning policies in simulated environments. However, similar gains have not yet been observed for real-robot reinforcement learning. In this work, we focus on enabling data-efficient real-robot learning from pixels. We present Contrastive Pre-training and Data Augmentation for Efficient Robotic Learning (CoDER), a method that uses data augmentation and unsupervised learning to train real-robot arm policies sample-efficiently from sparse rewards. While contrastive pre-training, data augmentation, demonstrations, and reinforcement learning are each insufficient for efficient learning on their own, our main contribution is showing that combining these disparate techniques yields a simple yet data-efficient method. We show that, given only 10 demonstrations, a single robotic arm can learn sparse-reward manipulation policies from pixels, such as reaching, picking, moving, pulling a large object, flipping a switch, and opening a drawer, in just 30 minutes of mean real-world training time. We include videos and code on the project website: https://sites.google.com/view/efficient-robotic-manipulation/home
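To make the combination of components concrete, the following is a minimal, hypothetical sketch (not the authors' released code) of how contrastive pre-training with data augmentation could be set up before reinforcement learning fine-tuning. The encoder architecture, the `random_crop` augmentation, and an InfoNCE-style loss in which two augmented views of the same demonstration frame form a positive pair are all illustrative assumptions; the actual CoDER implementation is available at the project website above.

```python
# Hypothetical sketch: contrastive pre-training of an image encoder on
# augmented demonstration frames, prior to RL fine-tuning. All names
# (Encoder, random_crop, contrastive_pretrain_step) are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small CNN mapping RGB frames to a latent vector."""
    def __init__(self, latent_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(latent_dim)

    def forward(self, x):
        return self.fc(self.conv(x))

def random_crop(imgs, out=84):
    """Data augmentation: randomly crop each image to `out` x `out` pixels."""
    _, _, h, w = imgs.shape
    crops = []
    for img in imgs:
        top = torch.randint(0, h - out + 1, (1,)).item()
        left = torch.randint(0, w - out + 1, (1,)).item()
        crops.append(img[:, top:top + out, left:left + out])
    return torch.stack(crops)

def contrastive_pretrain_step(encoder, frames, optimizer, temperature=0.1):
    """One InfoNCE step: two augmented views of the same frame are positives."""
    q = F.normalize(encoder(random_crop(frames)), dim=1)
    k = F.normalize(encoder(random_crop(frames)), dim=1)
    logits = q @ k.t() / temperature      # pairwise similarity matrix
    labels = torch.arange(q.size(0))      # diagonal entries are the positives
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: pre-train on demonstration frames, then pass the encoder to the RL learner.
encoder = Encoder()
demo_frames = torch.rand(16, 3, 92, 92)   # stand-in for demonstration images
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(10):
    contrastive_pretrain_step(encoder, demo_frames, opt)
```

In this sketch, the pre-trained encoder would then be reused by the downstream reinforcement learning agent, with the same crop augmentation applied to observations during policy training.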