Meta reinforcement learning (meta-RL) aims to learn a policy that simultaneously solves a set of training tasks and quickly adapts to new tasks. It requires massive amounts of data drawn from the training tasks to infer the common structure shared among them. Without heavy reward engineering, sparse rewards in long-horizon tasks exacerbate the sample-efficiency problem in meta-RL. Another challenge in meta-RL is the discrepancy in difficulty among tasks, which may cause one easy task to dominate learning of the shared policy and thus preclude adaptation to new tasks. This work introduces a novel objective function for learning an action translator among training tasks. We theoretically verify that the value of the transferred policy with the action translator can be close to the value of the source policy, and that our objective function (approximately) upper bounds the value difference. We propose combining the action translator with context-based meta-RL algorithms for better data collection and more efficient exploration during meta-training. Our approach empirically improves the sample efficiency and performance of meta-RL algorithms on sparse-reward tasks.
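To make the idea of an action translator concrete, the following is a minimal PyTorch sketch, not the paper's exact objective: it assumes the translator is a network H(s, a_src) -> a_tgt trained so that taking the translated action in the target task reproduces the transition observed for the source action in the source task. The names ActionTranslator, translator_loss, and the differentiable target-task dynamics model f_tgt are hypothetical and introduced here only for illustration.

```python
# Illustrative sketch (assumption, not the paper's stated objective): an action
# translator maps a state and a source-task action to an action for the target
# task, trained with a surrogate loss that matches target-task transitions under
# the translated action to the source-task transitions.

import torch
import torch.nn as nn


class ActionTranslator(nn.Module):
    """Maps (state, source-task action) to an action for the target task."""

    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, action_dim), nn.Tanh(),  # assumes actions in [-1, 1]
        )

    def forward(self, state: torch.Tensor, source_action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, source_action], dim=-1))


def translator_loss(translator, f_tgt, state, a_src, next_state_src):
    """One possible surrogate for the value-difference bound.

    f_tgt is a hypothetical learned, differentiable dynamics model of the
    target task: f_tgt(state, action) -> predicted next state. The loss pushes
    the target-task transition under the translated action toward the
    source-task transition (state, a_src, next_state_src).
    """
    a_tgt = translator(state, a_src)
    next_state_tgt = f_tgt(state, a_tgt)
    return ((next_state_tgt - next_state_src) ** 2).mean()
```

Under this reading, minimizing the transition mismatch keeps the transferred policy's value close to the source policy's value, which is the role the paper's objective plays; the exact loss used in the paper may differ from this squared-error surrogate.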