We present a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
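To make the conditional sequence modeling idea concrete, below is a minimal sketch (not the authors' released implementation) of how returns-to-go, states, and actions can be embedded, interleaved as (R_1, s_1, a_1, R_2, s_2, a_2, ...), passed through a causally masked Transformer, and decoded into action predictions at the state positions. All module names, dimensions, and the use of PyTorch's built-in Transformer encoder are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DecisionTransformerSketch(nn.Module):
    """Illustrative sketch of return-conditioned sequence modeling for RL."""

    def __init__(self, state_dim, act_dim, d_model=128, n_layers=3, n_heads=1):
        super().__init__()
        # Separate embeddings for returns-to-go, states, actions, and timesteps.
        self.embed_return = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.embed_timestep = nn.Embedding(1000, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, returns_to_go, states, actions, timesteps):
        # returns_to_go: (B, T, 1), states: (B, T, state_dim),
        # actions: (B, T, act_dim), timesteps: (B, T) long
        B, T = states.shape[0], states.shape[1]
        t = self.embed_timestep(timesteps)
        r = self.embed_return(returns_to_go) + t
        s = self.embed_state(states) + t
        a = self.embed_action(actions) + t
        # Interleave tokens as (R_1, s_1, a_1, ..., R_T, s_T, a_T).
        tokens = torch.stack([r, s, a], dim=2).reshape(B, 3 * T, -1)
        # Causal mask: each token may only attend to earlier tokens.
        mask = torch.triu(torch.ones(3 * T, 3 * T, dtype=torch.bool), diagonal=1)
        h = self.transformer(tokens, mask=mask)
        # Hidden states at the state positions (index 1 of each triple)
        # are decoded into the next action.
        h = h.reshape(B, T, 3, -1)
        return self.predict_action(h[:, :, 1])


# Usage: condition on a desired return and query the next action.
model = DecisionTransformerSketch(state_dim=17, act_dim=6)
rtg = torch.full((1, 1, 1), 3600.0)            # target episode return
states = torch.randn(1, 1, 17)
actions = torch.zeros(1, 1, 6)                  # placeholder for the unseen action
timesteps = torch.zeros(1, 1, dtype=torch.long)
next_action = model(rtg, states, actions, timesteps)[:, -1]
```

At rollout time, the desired return is decremented by the reward received after each step, and the growing sequence of returns-to-go, states, and actions is fed back in autoregressively to generate the next action.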