Top-performing Model-Based Reinforcement Learning (MBRL) agents, such as Dreamer, learn the world model by reconstructing the image observations. Hence, they often fail to discard task-irrelevant details and struggle to handle visual distractions. To address this issue, previous work has proposed to contrastively learn the world model, but the performance tends to be inferior in the absence of distractions. In this paper, we seek to enhance robustness to distractions for MBRL agents. Specifically, we consider incorporating prototypical representations, which have yielded more accurate and robust results than contrastive approaches in computer vision. However, it remains elusive how prototypical representations can benefit temporal dynamics learning in MBRL, since they treat each image independently without capturing temporal structures. To this end, we propose to learn the prototypes from the recurrent states of the world model, thereby distilling temporal structures from past observations and actions into the prototypes. The resulting model, DreamerPro, successfully combines Dreamer with prototypes, making large performance gains on the DeepMind Control suite both in the standard setting and when there are complex background distractions. Code available at https://github.com/fdeng18/dreamer-pro.
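To make the core idea concrete, below is a minimal sketch of how prototypes might be learned from the world model's recurrent states, assuming a SwAV-style swapped-assignment objective (Sinkhorn-Knopp balancing plus a temperature-scaled softmax). All names and dimensions here (STATE_DIM, NUM_PROTOS, state_proj, obs_proj, proto_loss, etc.) are illustrative assumptions, not the authors' API; the linked repository contains the actual implementation.

```python
# Illustrative sketch (NOT the authors' code): prototype learning from
# recurrent world-model states via a SwAV-style swapped-assignment loss.
import torch
import torch.nn.functional as F

# Assumed dimensions; a real agent would take these from its RSSM config.
STATE_DIM, OBS_DIM, PROJ_DIM, NUM_PROTOS, TEMP = 256, 1024, 32, 64, 0.1

state_proj = torch.nn.Linear(STATE_DIM, PROJ_DIM)   # recurrent state -> embedding
obs_proj = torch.nn.Linear(OBS_DIM, PROJ_DIM)       # CNN observation feature -> embedding
prototypes = torch.nn.Linear(PROJ_DIM, NUM_PROTOS, bias=False)  # learnable prototypes


@torch.no_grad()
def sinkhorn(scores, eps=0.05, n_iters=3):
    """Balanced soft cluster assignments via Sinkhorn-Knopp, as in SwAV."""
    q = torch.exp(scores / eps).t()                 # (protos, batch)
    q /= q.sum()
    n_protos, n_batch = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True)             # normalize rows ...
        q /= n_protos                               # ... to sum to 1 / n_protos
        q /= q.sum(dim=0, keepdim=True)             # normalize columns ...
        q /= n_batch                                # ... to sum to 1 / n_batch
    return (q * n_batch).t()                        # (batch, protos), rows sum to 1


def proto_loss(recurrent_state, obs_feature):
    """Swapped prediction: the recurrent state (which summarizes past
    observations and actions) and the current observation feature must
    agree on their prototype assignments."""
    z_s = F.normalize(state_proj(recurrent_state), dim=-1)
    z_o = F.normalize(obs_proj(obs_feature), dim=-1)
    with torch.no_grad():                           # keep prototypes on the unit sphere
        prototypes.weight.copy_(F.normalize(prototypes.weight, dim=-1))
    scores_s, scores_o = prototypes(z_s), prototypes(z_o)
    q_s, q_o = sinkhorn(scores_s), sinkhorn(scores_o)  # gradient-free targets
    loss_s = -(q_o * F.log_softmax(scores_s / TEMP, dim=-1)).sum(-1)
    loss_o = -(q_s * F.log_softmax(scores_o / TEMP, dim=-1)).sum(-1)
    return 0.5 * (loss_s + loss_o).mean()
```

Under the abstract's framing, a loss of this flavor would stand in for Dreamer's pixel-reconstruction term, letting the prototypes absorb temporal structure through the recurrent state; the sketch omits the temporal unrolling, data augmentation, and other training details a full agent would need.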