Deep neural networks have been successful in many reinforcement learning settings. However, compared to human learners, they are overly data-hungry. To build a sample-efficient world model, we apply a transformer to real-world episodes in an autoregressive manner: not only the compact latent states and the actions taken, but also the experienced or predicted rewards are fed into the transformer, so that it can attend flexibly to all three modalities at different time steps. The transformer allows our world model to access previous states directly, instead of viewing them through a compressed recurrent state. By utilizing the Transformer-XL architecture, it is able to learn long-term dependencies while staying computationally efficient. Our transformer-based world model (TWM) generates meaningful, new experience, which is used to train a policy that outperforms previous model-free and model-based reinforcement learning algorithms on the Atari 100k benchmark.
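To make the interleaving concrete, the following is a minimal PyTorch sketch of the idea described above: latent states, actions, and rewards are each embedded, interleaved into a single autoregressive token sequence, and processed by a causal transformer so every token can attend directly to any earlier state, action, or reward. This is not the authors' implementation; the paper uses a Transformer-XL with segment-level recurrence, whereas the sketch substitutes a plain causal `nn.TransformerEncoder`, and all names and sizes (`InterleavedWorldModelSketch`, `latent_dim=32`, etc.) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): embed latent states, actions, and
# rewards separately, interleave them as (s_1, a_1, r_1, s_2, a_2, r_2, ...),
# and run a causal transformer over the combined sequence. The paper uses
# Transformer-XL with recurrence; a plain causal encoder is used here for brevity.
import torch
import torch.nn as nn


class InterleavedWorldModelSketch(nn.Module):
    def __init__(self, latent_dim=32, num_actions=18, d_model=256,
                 n_layers=4, n_heads=4, max_tokens=3 * 64):
        super().__init__()
        # One embedding per modality, all mapped into the same d_model space.
        self.state_embed = nn.Linear(latent_dim, d_model)
        self.action_embed = nn.Embedding(num_actions, d_model)
        self.reward_embed = nn.Linear(1, d_model)
        self.pos_embed = nn.Embedding(max_tokens, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)

    def forward(self, latents, actions, rewards):
        # latents: (B, T, latent_dim), actions: (B, T) int64, rewards: (B, T)
        B, T, _ = latents.shape
        s = self.state_embed(latents)                 # (B, T, d_model)
        a = self.action_embed(actions)                # (B, T, d_model)
        r = self.reward_embed(rewards.unsqueeze(-1))  # (B, T, d_model)
        # Interleave three tokens per time step, so the model can attend to
        # any previous state, action, or reward directly.
        tokens = torch.stack([s, a, r], dim=2).reshape(B, 3 * T, -1)
        positions = torch.arange(3 * T, device=tokens.device)
        tokens = tokens + self.pos_embed(positions)
        # Causal mask: each token only attends to earlier tokens.
        mask = torch.triu(torch.full((3 * T, 3 * T), float('-inf'),
                                     device=tokens.device), diagonal=1)
        return self.transformer(tokens, mask=mask)    # (B, 3T, d_model)


# Usage with dummy data: 2 episodes of 10 steps each.
model = InterleavedWorldModelSketch()
latents = torch.randn(2, 10, 32)
actions = torch.randint(0, 18, (2, 10))
rewards = torch.randn(2, 10)
out = model(latents, actions, rewards)
print(out.shape)  # torch.Size([2, 30, 256])
```

In the actual approach, the transformer outputs at each step would feed prediction heads for the next latent state, reward, and episode continuation, and imagined rollouts from those predictions are what train the policy; the sketch only covers the sequence encoding.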