Sample efficiency remains a fundamental issue in reinforcement learning. Model-based algorithms try to make better use of data by simulating the environment with a model. We propose a new neural network architecture for world models based on a vector quantized-variational autoencoder (VQ-VAE) to encode observations and a convolutional LSTM to predict the next embedding indices. A model-free PPO agent is trained purely on simulated experience from the world model. We adopt the setup introduced by Kaiser et al. (2020), which allows only 100K interactions with the real environment. We apply our method to 36 Atari environments and show that we achieve performance comparable to their SimPLe algorithm, while our model is significantly smaller.
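As a rough illustration of the pipeline described above, the PyTorch sketch below shows how a VQ-VAE can turn a frame into a grid of discrete embedding indices and how a convolutional LSTM cell can produce logits over the next frame's indices. All names, shapes, and hyperparameters (codebook size, channel counts, the toy encoder) are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of the world-model pieces named in the abstract; every size and
# hyperparameter here is an illustrative assumption, not the paper's exact design.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour lookup into a learned codebook (the VQ-VAE bottleneck)."""

    def __init__(self, num_codes=512, code_dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z):                           # z: (B, C, H, W) continuous latents
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)            # (B*H*W, C)
        dists = torch.cdist(flat, self.codebook.weight)        # distance to every code
        indices = dists.argmin(dim=1).view(b, h, w)            # discrete index grid
        quantized = self.codebook(indices).permute(0, 3, 1, 2)
        return quantized, indices


class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell operating on the spatial grid of latents."""

    def __init__(self, in_ch, hidden_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.conv(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)


if __name__ == "__main__":
    # Toy encoder: 64x64 RGB frame -> 4x4 grid of 64-dim latents (placeholder sizes).
    encoder = nn.Sequential(nn.Conv2d(3, 64, 4, stride=4), nn.ReLU(),
                            nn.Conv2d(64, 64, 4, stride=4))
    vq = VectorQuantizer(num_codes=512, code_dim=64)
    cell = ConvLSTMCell(in_ch=64, hidden_ch=128)
    head = nn.Conv2d(128, 512, 1)                   # logits over the 512 codebook entries

    frame = torch.randn(1, 3, 64, 64)               # dummy Atari-like observation
    z_q, idx = vq(encoder(frame))                   # current frame as discrete indices
    h = torch.zeros(1, 128, 4, 4)
    state = (h, torch.zeros_like(h))
    out, state = cell(z_q, state)
    next_index_logits = head(out)                   # (1, 512, 4, 4): prediction for t+1
```

In the full setup one would also need a decoder that reconstructs frames from the quantized latents (for the VQ-VAE training loss) and a loop that rolls the model forward to generate the simulated experience on which the PPO agent is trained; both are omitted here for brevity.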