Designing and analyzing model-based RL (MBRL) algorithms with guaranteed monotonic improvement has been challenging, mainly due to the interdependence between policy optimization and model learning. Existing discrepancy bounds generally ignore the impact of model shifts, and the corresponding algorithms are prone to performance degradation caused by drastic model updates. In this work, we first propose a novel and general theoretical scheme for a non-decreasing performance guarantee in MBRL. The derived bounds reveal the relationship between model shifts and performance improvement. These findings encourage us to formulate a constrained lower-bound optimization problem that permits the monotonicity of MBRL. A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns. Motivated by these analyses, we design a simple but effective algorithm, CMLO (Constrained Model-shift Lower-bound Optimization), by introducing an event-triggered mechanism that flexibly determines when to update the model. Experiments show that CMLO surpasses other state-of-the-art methods and provides a boost when various policy optimization methods are employed.
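The core idea of the event-triggered mechanism is to refit the dynamics model only when a trigger condition on the newly collected data fires, rather than on a fixed schedule. Below is a minimal sketch of that loop, not the authors' implementation: the trigger proxy (`feature_shift`), the threshold, and the fake data-collection routine are all hypothetical placeholders introduced for illustration.

```python
# Minimal sketch (assumed, not from the paper) of an event-triggered
# model-update loop in the spirit of CMLO: the dynamics model is refit only
# when the newly collected data has drifted far enough from the data the
# current model was trained on.
import numpy as np

rng = np.random.default_rng(0)

def feature_shift(old_batch, new_batch):
    """Crude proxy for distribution shift: distance between batch means."""
    return float(np.linalg.norm(old_batch.mean(axis=0) - new_batch.mean(axis=0)))

def collect_rollout(policy_scale, n=64, dim=4):
    """Stand-in for environment interaction; returns fake state features."""
    return rng.normal(loc=policy_scale, scale=1.0, size=(n, dim))

shift_threshold = 0.5                      # assumed trigger level
model_data = collect_rollout(policy_scale=0.0)
model_version = 0

for step in range(1, 11):
    # The policy improves between rollouts, slowly shifting the visited states.
    new_batch = collect_rollout(policy_scale=0.1 * step)

    # Event-triggered check: update the model only when the shift proxy
    # exceeds the threshold; otherwise keep optimizing the policy.
    if feature_shift(model_data, new_batch) > shift_threshold:
        model_data = np.concatenate([model_data, new_batch], axis=0)
        model_version += 1                 # placeholder for an actual model refit
        print(f"step {step:2d}: trigger fired -> model updated (v{model_version})")
    else:
        print(f"step {step:2d}: no update, keep optimizing the policy")
```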