While deep reinforcement learning has shown important empirical success, it tends to learn relatively slowly due to the slow propagation of reward information and the slow updates of parametric neural networks. Non-parametric episodic memory, on the other hand, provides a faster-learning alternative that does not require representation learning and uses the maximum episodic return as the state-action value for action selection. Episodic memory and reinforcement learning both have their own strengths and weaknesses. Notably, humans can leverage multiple memory systems concurrently during learning and benefit from all of them. In this work, we propose a method called the Two-Memory reinforcement learning agent (2M) that combines episodic memory and reinforcement learning to distill both of their strengths. The 2M agent exploits the speed of the episodic memory part and the optimality and generalization capacity of the reinforcement learning part, letting the two complement each other. Our experiments demonstrate that the 2M agent is more data efficient and outperforms both pure episodic memory and pure reinforcement learning, as well as a state-of-the-art memory-augmented RL agent. Moreover, the proposed approach provides a general framework that can be used to combine any episodic memory agent with other off-policy reinforcement learning algorithms.
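The abstract describes the episodic memory component as storing the maximum episodic return per state-action pair and using it as the state-action value for action selection. The following is a minimal sketch of that idea only, under assumptions of our own: the `EpisodicMemory` class name, the hashable-state representation, the undiscounted return, and the greedy fallback are illustrative choices, not the paper's implementation.

```python
# Sketch: episodic memory that keeps the maximum episodic return seen for
# each (state, action) pair and acts greedily with respect to those values.
# All names and design choices here are illustrative assumptions.
from collections import defaultdict


class EpisodicMemory:
    def __init__(self, num_actions):
        self.num_actions = num_actions
        # value[(state, action)] = highest episodic return observed so far
        self.value = defaultdict(lambda: float("-inf"))

    def update(self, episode):
        """episode: list of (state, action, reward) tuples from one rollout."""
        g = 0.0
        for state, action, reward in reversed(episode):
            g += reward  # undiscounted Monte-Carlo return, for simplicity
            key = (state, action)
            if g > self.value[key]:
                self.value[key] = g  # keep only the maximum return

    def best_action(self, state):
        """Greedy action under the episodic values; None if the state is unseen."""
        values = [self.value[(state, a)] for a in range(self.num_actions)]
        if all(v == float("-inf") for v in values):
            return None
        return max(range(self.num_actions), key=lambda a: values[a])
```

A 2M-style agent could, for example, fall back to an off-policy Q-network whenever `best_action` returns `None`, or switch between the two memories according to some schedule; the abstract leaves the concrete combination mechanism to the body of the paper.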