Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems. Recently, many model-based methods have been designed to address this issue, with learning in the imagination of a world model being one of the most prominent approaches. However, while virtually unlimited interaction with a simulated environment sounds appealing, the world model has to be accurate over extended periods of time. Motivated by the success of Transformers in sequence modeling tasks, we introduce IRIS, a data-efficient agent that learns in a world model composed of a discrete autoencoder and an autoregressive Transformer. With the equivalent of only two hours of gameplay in the Atari 100k benchmark, IRIS achieves a mean human normalized score of 1.046, and outperforms humans on 10 out of 26 games, setting a new state of the art for methods without lookahead search. To foster future research on Transformers and world models for sample-efficient reinforcement learning, we release our code and models at https://github.com/eloialonso/iris.