We propose a novel approach to two fundamental challenges in Model-based Reinforcement Learning (MBRL): the computational expense of repeatedly finding a good policy in the learned model, and the objective mismatch between model fitting and policy computation. Our "lazy" method leverages a unified objective, Performance Difference via Advantage in Model, which captures the performance difference between the learned policy and an expert policy under the true dynamics. This objective shows that optimizing the expected policy advantage in the learned model under an exploration distribution is sufficient for policy computation, yielding a significant gain in computational efficiency over traditional planning methods. Additionally, the unified objective uses a value moment matching term for model fitting, which aligns model learning with how the model is used during policy computation. We present two no-regret algorithms that optimize the proposed objective, and demonstrate their statistical and computational gains over existing MBRL methods on simulated benchmarks.
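For context, the unified objective can be read as a model-based analogue of the classical performance difference lemma (Kakade & Langford, 2002), which relates the return gap between two policies to expected advantages. The statement below is a sketch in standard notation rather than the paper's exact objective: $J(\pi)$ denotes the discounted return of policy $\pi$, $d^{\pi}_{\mu}$ the discounted state visitation distribution induced by $\pi$ from initial distribution $\mu$, and $A^{\pi^e}$ the advantage function of the expert policy $\pi^e$; these symbols are illustrative assumptions, not notation taken from the paper.
\[
J(\pi) - J(\pi^e) \;=\; \frac{1}{1-\gamma}\,\mathbb{E}_{s \sim d^{\pi}_{\mu}}\,\mathbb{E}_{a \sim \pi(\cdot \mid s)}\!\left[ A^{\pi^e}(s, a) \right].
\]
The lazy approach described above replaces the true-dynamics advantage with an advantage computed in the learned model under an exploration distribution, which is what makes the policy-computation step cheap relative to full planning.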