Many model-based reinforcement learning (RL) methods follow a similar template: fit a model to previously observed data, and then use data from that model for RL or planning. However, models that achieve better training performance (e.g., lower MSE) are not necessarily better for control: an RL agent may seek out the small fraction of states where an accurate model makes mistakes, or it might act in ways that do not expose the errors of an inaccurate model. As noted in prior work, there is an objective mismatch: models are useful if they yield good policies, but they are trained to maximize their accuracy, rather than the performance of the policies that result from them. In this work, we propose a single objective for jointly training the model and the policy, such that updates to either component increase a lower bound on expected return. To the best of our knowledge, this is the first lower bound for model-based RL that holds globally and can be efficiently estimated in continuous settings; it is the only lower bound that mends the objective mismatch problem. A version of this bound becomes tight under certain assumptions. Optimizing this bound resembles a GAN: a classifier distinguishes between real and fake transitions, the model is updated to produce transitions that look realistic, and the policy is updated to avoid states where the model predictions are unrealistic. Numerical simulations demonstrate that optimizing this bound yields reward-maximizing policies and dynamics that (perhaps surprisingly) can aid in exploration. We also show that a deep RL algorithm loosely based on our lower bound can achieve performance competitive with prior model-based methods, and better performance on certain hard exploration tasks.
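To make the GAN-like alternation concrete, the sketch below illustrates the three updates the abstract describes on a toy 1-D problem: a logistic classifier is trained to separate real from model-generated transitions, the model is updated to make its transitions look real, and the policy is trained on model rollouts with the classifier log-ratio added to the reward so it avoids states where the model looks unrealistic. Everything here (the 1-D dynamics, the Gaussian model and policy, the residual features, and the finite-difference updates) is an illustrative assumption, not the paper's architecture or objective in its exact form.

```python
# Minimal sketch (assumed components, not the authors' implementation) of the
# GAN-like joint model/policy training loop described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def real_step(s, a):                      # true (unknown) dynamics: s' = s + a + small noise
    return s + a + 0.1 * rng.normal()

def model_step(s, a, theta):              # learned model: biased mean and learned log-std
    return s + a + theta[0] + np.exp(theta[1]) * rng.normal()

def reward(s_next):                       # task reward: drive the state toward 0
    return -s_next ** 2

def classifier_logit(s, a, s_next, w):    # logistic classifier on the prediction residual
    resid = s_next - s - a
    return w[0] + w[1] * resid + w[2] * resid ** 2

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

theta = np.array([1.0, -1.0])             # model parameters (biased at initialization)
w = np.zeros(3)                           # classifier parameters
mu = 0.0                                  # mean of the Gaussian policy a ~ N(mu, 0.1^2)

def objective(mu_val, theta_val, w_val, seed, n=256):
    """Monte-Carlo surrogate: task reward on model rollouts plus the classifier
    log-ratio log D/(1-D), which is low where the model looks unrealistic."""
    local = np.random.default_rng(seed)   # common random numbers for finite differences
    total = 0.0
    for _ in range(n):
        s = local.normal()
        a = mu_val + 0.1 * local.normal()
        s_next = s + a + theta_val[0] + np.exp(theta_val[1]) * local.normal()
        total += reward(s_next) + classifier_logit(s, a, s_next, w_val)
    return total / n

eps = 1e-2
for it in range(200):
    # (1) Classifier: logistic regression, real transitions labeled 1, model ones 0.
    for _ in range(20):
        s = rng.normal()
        a = mu + 0.1 * rng.normal()
        for s_next, label in [(real_step(s, a), 1.0), (model_step(s, a, theta), 0.0)]:
            resid = s_next - s - a
            feats = np.array([1.0, resid, resid ** 2])
            w += 0.05 * (label - sigmoid(feats @ w)) * feats

    # (2) Model: finite-difference ascent on the surrogate (make transitions look real).
    grad_theta = np.array([
        (objective(mu, theta + eps * e, w, it) - objective(mu, theta - eps * e, w, it)) / (2 * eps)
        for e in np.eye(2)])
    theta += 0.05 * grad_theta

    # (3) Policy: finite-difference ascent on reward plus the realism term.
    grad_mu = (objective(mu + eps, theta, w, it) - objective(mu - eps, theta, w, it)) / (2 * eps)
    mu += 0.05 * grad_mu

print("model bias ->", theta[0], " policy mean ->", mu)
```

Because all three components ascend the same scalar objective, the loop mirrors the structure of the bound: the classifier term rewards the model for realistic transitions and penalizes the policy for visiting states where the model's predictions are flagged as fake, while the task reward is computed entirely under the model.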