Over the last few years, neither model-free nor model-based learning methods have seen developments that would make one obsolete relative to the other. In most cases, the choice of technique depends heavily on the use case and other attributes, e.g. the environment. Each approach has its own strengths, for example in sample efficiency or computational efficiency, and combining the two can unite these strengths and achieve better overall performance. The TD-MPC framework is an example of this approach. On the one hand, a world model combined with model predictive control provides a good short-horizon estimate of the value function; on the other hand, a Q-function provides a good long-term estimate. Similar to algorithms like MuZero, a latent state representation is used in which only task-relevant information is encoded, reducing complexity. In this paper, we propose adding a reconstruction function to the TD-MPC framework, so that the agent can reconstruct the original observation from its internal state representation. This gives the agent a more stable learning signal during training and also improves sample efficiency. The proposed additional loss term leads to improved performance on both state- and image-based tasks from the DeepMind Control Suite.
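To make the proposed addition concrete, the sketch below shows one way a reconstruction term can be attached to an existing TD-MPC-style latent objective. It is a minimal illustration under assumed names (`Encoder`, `Decoder`, `augmented_loss`, `recon_coef` are all hypothetical), not the paper's actual implementation.

```python
# Minimal sketch (assumed architecture, not the paper's code): a decoder
# reconstructs the observation from the latent state, and the resulting
# reconstruction error is added to the existing TD-MPC-style loss.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps an observation to a compact, task-relevant latent state z."""

    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class Decoder(nn.Module):
    """Reconstructs the original observation from the latent state."""

    def __init__(self, latent_dim: int, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ELU(),
            nn.Linear(256, obs_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def augmented_loss(td_mpc_loss: torch.Tensor,
                   encoder: Encoder,
                   decoder: Decoder,
                   obs: torch.Tensor,
                   recon_coef: float = 1.0) -> torch.Tensor:
    """Add a reconstruction term to an existing TD-MPC-style objective."""
    z = encoder(obs)
    obs_hat = decoder(z)
    recon_loss = nn.functional.mse_loss(obs_hat, obs)
    return td_mpc_loss + recon_coef * recon_loss
```

The weighting coefficient (here `recon_coef`) would trade off how strongly the latent state is pushed to retain enough information to reproduce the observation versus the original consistency, reward, and value terms of the TD-MPC objective.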