Many model-based reinforcement learning (RL) methods follow a similar template: fit a model to previously observed data, and then use data from that model for RL or planning. However, models that achieve better training performance (e.g., lower MSE) are not necessarily better for control: an RL agent may seek out the small fraction of states where an accurate model makes mistakes, or it might act in ways that do not expose the errors of an inaccurate model. As noted in prior work, there is an objective mismatch: models are useful if they yield good policies, but they are trained to maximize their accuracy, rather than the performance of the policies that result from them. In this work, we propose a single objective for jointly training the model and the policy, such that updates to either component increase a lower bound on expected return. This joint optimization resolves the objective mismatch identified in prior work. Our objective is a global lower bound on expected return, and this bound becomes tight under certain assumptions. The resulting algorithm (MnM) is conceptually similar to a GAN: a classifier distinguishes between real and fake transitions, the model is updated to produce transitions that look realistic, and the policy is updated to avoid states where the model's predictions are unrealistic.
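To make the GAN-like structure concrete, the sketch below illustrates one plausible way to set up the three updates described above. It is not the authors' implementation: the networks (`classifier`, `dynamics_model`, `policy`), their architectures, and the exact form of the policy's classifier-based reward bonus are assumptions made for illustration.

```python
# Minimal sketch of the GAN-like structure described in the abstract, using PyTorch.
# All network definitions and the reward-shaping term are illustrative assumptions.
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2  # placeholder dimensions

# Hypothetical components; architectures are placeholders.
classifier = nn.Sequential(      # judges whether (s, a, s') looks like a real transition
    nn.Linear(2 * state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
dynamics_model = nn.Sequential(  # predicts s' from (s, a)
    nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, state_dim))
policy = nn.Sequential(          # maps states to actions
    nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim), nn.Tanh())

bce = nn.BCEWithLogitsLoss()

def classifier_loss(s, a, s_next_real):
    """Train the classifier to distinguish real transitions from model transitions."""
    s_next_fake = dynamics_model(torch.cat([s, a], dim=-1))
    real_logits = classifier(torch.cat([s, a, s_next_real], dim=-1))
    fake_logits = classifier(torch.cat([s, a, s_next_fake.detach()], dim=-1))
    return (bce(real_logits, torch.ones_like(real_logits))
            + bce(fake_logits, torch.zeros_like(fake_logits)))

def model_loss(s, a):
    """Update the model so its predicted transitions look realistic to the classifier."""
    s_next_fake = dynamics_model(torch.cat([s, a], dim=-1))
    fake_logits = classifier(torch.cat([s, a, s_next_fake], dim=-1))
    return bce(fake_logits, torch.ones_like(fake_logits))

def shaped_reward(s, a, s_next_model, env_reward):
    """Assumed reward shaping: penalize the policy for visiting states where the
    classifier judges the model's prediction to be unrealistic."""
    logits = classifier(torch.cat([s, a, s_next_model], dim=-1))
    realism_bonus = nn.functional.logsigmoid(logits)  # log C(s, a, s')
    return env_reward + realism_bonus.squeeze(-1)
```

In this sketch, the classifier and model play the adversarial game, while the policy is trained with any standard RL algorithm on model rollouts whose rewards are augmented by the classifier's realism score, steering it away from regions where the model is unreliable.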