Reinforcement learning methods can achieve strong performance but require a large amount of training data collected on the same robotic platform. A policy trained with such expensive data is rendered useless after even a minor change to the robot hardware. In this paper, we address the challenging problem of adapting a policy, trained to perform a task, to a novel robotic hardware platform given only a few demonstrations of robot motion trajectories on the target robot. We formulate this as a few-shot meta-learning problem where the goal is to find a meta-model that captures the common structure shared across different robotic platforms so that data-efficient adaptation can be performed. We achieve such adaptation by introducing a learning framework built around a probabilistic gradient-based meta-learning algorithm that models the uncertainty arising from the few-shot setting with a low-dimensional latent variable. We experimentally evaluate our framework on a simulated reaching task and a real-robot picking task using 400 simulated robots generated by varying the physical parameters of an existing set of robotic platforms. Our results show that the proposed method can successfully adapt a trained policy to robotic platforms with novel physical parameters, and demonstrate the superiority of our meta-learning algorithm over state-of-the-art methods on the introduced few-shot policy adaptation problem.
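To make the adaptation setting concrete, the sketch below illustrates one plausible reading of few-shot adaptation with a low-dimensional latent variable: a policy conditioned on a latent z is assumed to be meta-trained across simulated robots, and adaptation to a new robot fits only z to a handful of demonstration transitions by gradient descent on a behaviour-cloning loss. All names (`LatentPolicy`, `adapt_latent`), dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of latent-variable few-shot policy adaptation (not the paper's code).
import torch
import torch.nn as nn

class LatentPolicy(nn.Module):
    """Policy conditioned on the state and a low-dimensional latent variable z."""
    def __init__(self, state_dim, action_dim, latent_dim=4, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, z):
        z = z.expand(state.shape[0], -1)          # broadcast the latent over the batch
        return self.net(torch.cat([state, z], dim=-1))

def adapt_latent(policy, demos, latent_dim=4, steps=50, lr=1e-2):
    """Few-shot adaptation: freeze the meta-trained policy weights and fit only the
    latent z (initialised at the prior mean, here zero) to target-robot demonstrations."""
    states, actions = demos                       # tensors of shape (N, state_dim), (N, action_dim)
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(policy(states, z), actions)  # behaviour-cloning loss
        loss.backward()
        opt.step()
    return z.detach()

if __name__ == "__main__":
    policy = LatentPolicy(state_dim=10, action_dim=4)
    demo_states, demo_actions = torch.randn(32, 10), torch.randn(32, 4)  # a few demo transitions
    z_new_robot = adapt_latent(policy, (demo_states, demo_actions))
    print(z_new_robot)
```

In this reading, the uncertainty from having only a few demonstrations would be captured by treating z probabilistically (e.g., a prior and an approximate posterior over z) rather than as the point estimate shown here.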