Learning models of the environment from pure interaction is often considered an essential component of building lifelong reinforcement learning agents. However, the common practice in model-based reinforcement learning is to learn models that capture every aspect of the agent's environment, regardless of whether those aspects are relevant to making optimal decisions. In this paper, we argue that such models are not particularly well-suited for performing scalable and robust planning in lifelong reinforcement learning scenarios, and we propose new kinds of models that capture only the relevant aspects of the environment, which we call "minimal value-equivalent partial models". After providing a formal definition of these models, we present theoretical results demonstrating the scalability advantages of planning with them, and we perform experiments that empirically illustrate these results. We then provide useful heuristics for learning such models with deep learning architectures and empirically demonstrate that models learned in this way allow for planning that is robust to distribution shifts and compounding model errors. Overall, both our theoretical and empirical results suggest that minimal value-equivalent partial models can provide significant benefits for scalable and robust planning in lifelong reinforcement learning scenarios.
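To make the term "value-equivalent" concrete, the following is a minimal sketch of the standard value-equivalence condition that such partial models are meant to satisfy; the notation (environment $m = (p, r)$, learned model $\tilde{m} = (\tilde{p}, \tilde{r})$, policy set $\Pi$, value-function set $\mathcal{V}$) is assumed here for illustration and is not fixed by the abstract itself:
\[
  \mathcal{T}^{\tilde{m}}_{\pi} v \;=\; \mathcal{T}^{m}_{\pi} v
  \quad \text{for all } \pi \in \Pi,\; v \in \mathcal{V},
  \qquad \text{where} \quad
  (\mathcal{T}^{m}_{\pi} v)(s) \;=\; \sum_{a} \pi(a \mid s)
  \Big[ r(s,a) + \gamma \sum_{s'} p(s' \mid s,a)\, v(s') \Big].
\]
Under this reading, a partial model may discard aspects of the state that never influence these quantities, and a minimal such model retains only what is needed for the policies and value functions of interest.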