In recent years, domain randomization has gained significant traction as a method for sim-to-real transfer of reinforcement learning policies in robotic manipulation; however, finding optimal randomization distributions can be difficult. In this paper, we introduce DROPO, a novel method for estimating domain randomization distributions for safe sim-to-real transfer. Unlike prior work, DROPO only requires a limited, precollected offline dataset of trajectories, and explicitly models parameter uncertainty to match real data. We demonstrate that DROPO can recover dynamics parameter distributions in simulation and find a distribution capable of compensating for an unmodelled phenomenon. We further evaluate the method in two zero-shot sim-to-real transfer scenarios, showing successful domain transfer and improved performance over prior methods.