A longstanding goal of the field of AI is a strategy for compiling diverse experience into a highly capable, generalist agent. In the subfields of vision and language, this was largely achieved by scaling up transformer-based models and training them on large, diverse datasets. Motivated by this progress, we investigate whether the same strategy can be used to produce generalist reinforcement learning agents. Specifically, we show that a single transformer-based model - with a single set of weights - trained purely offline can play a suite of up to 46 Atari games simultaneously at close-to-human performance. When trained and evaluated appropriately, we find that the same trends observed in language and vision hold, including scaling of performance with model size and rapid adaptation to new games via fine-tuning. We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning, and find that our Multi-Game Decision Transformer models offer the best scalability and performance. We release the pre-trained models and code to encourage further research in this direction. Additional information, videos and code can be seen at: sites.google.com/view/multi-game-transformers
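To make the training setup concrete, below is a minimal sketch of decision-transformer-style offline training: each timestep of a logged trajectory is tokenized as a (return-to-go, observation, action) triple, and a causal transformer is trained with a behavioral-cloning-style cross-entropy loss to predict the logged action. This is not the authors' released implementation; the class name `MultiGameDTSketch`, the flattened-frame observation embedding, and all hyperparameters are illustrative assumptions (the paper's full model uses image-patch tokens and quantized returns, among other details).

```python
# Minimal sketch (illustrative assumptions, not the released code) of
# decision-transformer-style offline training on Atari trajectories.
import torch
import torch.nn as nn

class MultiGameDTSketch(nn.Module):
    def __init__(self, n_actions=18, d_model=128, n_layers=4, n_heads=4, max_len=3 * 32):
        super().__init__()
        self.rtg_embed = nn.Linear(1, d_model)        # scalar return-to-go -> token
        self.obs_embed = nn.Linear(84 * 84, d_model)  # flattened frame -> token (paper uses patches)
        self.act_embed = nn.Embedding(n_actions, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, rtg, obs, act):
        # rtg: (B, T, 1), obs: (B, T, 84*84), act: (B, T) integer actions
        B, T = act.shape
        # Interleave (return-to-go, observation, action) tokens per timestep.
        tokens = torch.stack(
            [self.rtg_embed(rtg), self.obs_embed(obs), self.act_embed(act)], dim=2
        ).reshape(B, 3 * T, -1)
        tokens = tokens + self.pos_embed(torch.arange(3 * T, device=act.device))
        # Causal mask so each token attends only to the past.
        mask = torch.triu(
            torch.full((3 * T, 3 * T), float("-inf"), device=act.device), diagonal=1
        )
        h = self.encoder(tokens, mask=mask)
        # Predict the action from the hidden state at each observation token.
        return self.action_head(h[:, 1::3])

model = MultiGameDTSketch()
rtg = torch.randn(2, 32, 1)
obs = torch.randn(2, 32, 84 * 84)
act = torch.randint(0, 18, (2, 32))
logits = model(rtg, obs, act)
# Behavioral-cloning-style loss on the logged actions.
loss = nn.functional.cross_entropy(logits.reshape(-1, 18), act.reshape(-1))
loss.backward()
```

The same objective supports the fine-tuning result described above: adapting to a new game amounts to continuing this training loop on the new game's offline trajectories.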