Model-based meta-reinforcement learning (RL) methods have recently been shown to be a promising approach for improving the sample efficiency of RL in multi-task settings. However, the theoretical understanding of these methods has yet to be established, and there is currently no theoretical guarantee of their performance in real-world environments. In this paper, we analyze the performance guarantee of model-based meta-RL methods by extending the theorems proposed by Janner et al. (2019). On the basis of our theoretical results, we propose Meta-Model-Based Meta-Policy Optimization (M3PO), a model-based meta-RL method with a performance guarantee. We demonstrate that M3PO outperforms existing meta-RL methods on continuous-control benchmarks.