Model-based reinforcement learning is one approach to increase sample efficiency. However, the accuracy of the dynamics model and the resulting compounding errors over modelled trajectories are commonly regarded as key limitations. A natural question to ask is: how much more sample efficiency can be gained by improving the learned dynamics models? Our paper empirically answers this question for the class of model-based value expansion methods in continuous control problems. Value expansion methods should benefit from increased model accuracy, as it enables longer rollout horizons and better value function approximations. Our empirical study, which leverages oracle dynamics models to avoid compounding model errors, shows that (1) longer horizons increase sample efficiency, but the improvement diminishes with each additional expansion step, and (2) increased model accuracy only marginally improves sample efficiency compared to learned models with identical horizons. Longer horizons and increased model accuracy therefore yield diminishing returns in sample efficiency. These improvements are particularly disappointing when compared to model-free value expansion methods: although they introduce no computational overhead, we find their performance to be on par with model-based value expansion methods. We therefore conclude that the limitation of model-based value expansion methods is not the accuracy of the learned dynamics models. While higher model accuracy is beneficial, our experiments show that even a perfect model does not provide unrivalled sample efficiency; the bottleneck lies elsewhere.
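For context, the H-step value expansion target that this class of methods builds on is commonly written as below; this is a standard formulation for illustration, not notation quoted from the paper itself:

$$
\hat{Q}^{H}(s_t, a_t) \;=\; \sum_{k=0}^{H-1} \gamma^{k}\, \hat{r}_{t+k} \;+\; \gamma^{H}\, Q_{\phi}\big(\hat{s}_{t+H}, \pi(\hat{s}_{t+H})\big),
\qquad \hat{s}_{t+k+1} = \hat{f}(\hat{s}_{t+k}, a_{t+k}),
$$

where $\hat{f}$ is the (learned or oracle) dynamics model, $\hat{r}_{t+k}$ are the rewards along the model rollout with actions $a_{t+k} = \pi(\hat{s}_{t+k})$ for $k \ge 1$, $Q_{\phi}$ is the bootstrapped value function, and $H$ is the expansion horizon; with $H = 1$ and the true environment transition in place of the model prediction, this reduces to the standard one-step temporal-difference target. Longer horizons shift more of the target onto the model, which is why model accuracy and compounding errors are expected to matter.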