Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay improves sample efficiency by reusing past experiences, and convolutional neural networks (CNNs) process high-dimensional inputs effectively. However, such techniques demand high memory and computational bandwidth. In this paper, we present Stored Embeddings for Efficient Reinforcement Learning (SEER), a simple modification of existing off-policy RL methods, to address these computational and memory requirements. To reduce the computational overhead of gradient updates in CNNs, we freeze the lower layers of CNN encoders early in training, exploiting the early convergence of their parameters. Additionally, we reduce memory requirements by storing low-dimensional latent vectors for experience replay instead of high-dimensional images, enabling an adaptive increase in replay buffer capacity, a useful technique in constrained-memory settings. In our experiments, we show that SEER does not degrade the performance of RL agents while significantly saving computation and memory across a diverse set of DeepMind Control environments and Atari games. Finally, we show that SEER is useful for computation-efficient transfer learning in RL because the lower layers of CNNs extract generalizable features, which can be reused across different tasks and domains.
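The two ideas in the abstract can be sketched in a few lines: an encoder whose weights are frozen (it receives no gradient updates) maps each raw frame to a low-dimensional latent, and only that latent is stored in the replay buffer. The NumPy sketch below is our own illustration, not the paper's reference implementation; the fixed random projection stands in for the converged lower CNN layers, and all names (`LatentReplayBuffer`, `encode`) are hypothetical.

```python
import numpy as np

class LatentReplayBuffer:
    """Ring buffer that stores low-dimensional latents instead of raw frames."""

    def __init__(self, capacity, latent_dim):
        self.latents = np.zeros((capacity, latent_dim), dtype=np.float32)
        self.capacity = capacity
        self.size = 0

    def add(self, latent):
        # Overwrite oldest entries once capacity is reached.
        self.latents[self.size % self.capacity] = latent
        self.size += 1

rng = np.random.default_rng(0)
frame_shape = (84, 84, 3)   # Atari-style raw observation
latent_dim = 50             # low-dimensional embedding

# Frozen "encoder": a fixed random projection standing in for the
# converged lower CNN layers (no gradients ever flow into W).
W = rng.standard_normal((np.prod(frame_shape), latent_dim)).astype(np.float32)

def encode(frame):
    # Flatten the frame and project it to the latent space.
    return frame.reshape(-1) @ W

buffer = LatentReplayBuffer(capacity=1000, latent_dim=latent_dim)
frame = rng.random(frame_shape, dtype=np.float32)
buffer.add(encode(frame))

# Per-transition memory saving: 50 floats stored instead of 84*84*3 values,
# which is what allows the replay capacity to grow under a fixed budget.
compression = np.prod(frame_shape) / latent_dim
```

Because the stored latents are ~400x smaller than the frames they replace, the same memory budget admits a proportionally larger replay buffer, which is the adaptive-capacity effect the abstract refers to.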