Experience replay (ER) improves the data efficiency of off-policy reinforcement learning (RL) algorithms by allowing an agent to store and reuse its past experiences in a replay buffer. While many techniques have been proposed to enhance ER by biasing how experiences are sampled from the buffer, thus far they have not considered strategies for refreshing experiences inside the buffer. In this work, we introduce Lucid Dreaming for Experience Replay (LiDER), a conceptually new framework that allows replay experiences to be refreshed by leveraging the agent's current policy. LiDER consists of three steps: First, LiDER moves an agent back to a past state. Second, from that state, LiDER lets the agent execute a sequence of actions by following its current policy -- as if the agent were "dreaming" about the past and could try out different behaviors to encounter new experiences in the dream. Third, LiDER stores and reuses the new experience if it turns out better than what the agent previously experienced, thereby refreshing its memories. LiDER is designed to be easily incorporated into off-policy, multi-worker RL algorithms that use ER; we present in this work a case study of applying LiDER to an actor-critic-based algorithm. Results show LiDER consistently improves performance over the baseline in six Atari 2600 games. Our open-source implementation of LiDER and the data used to generate all plots in this work are available at github.com/duyunshu/lucid-dreaming-for-exp-replay.
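The three-step refresh loop described above can be sketched in Python. This is a minimal, hypothetical illustration, not the paper's implementation: the environment method `reset_to` (restoring a past state), the buffer layout, and the `policy` callable are all assumptions introduced for this sketch.

```python
import random

def lider_refresh(buffer, env, policy, horizon=10):
    """One hypothetical LiDER refresh step:
    1) move the agent back to a stored past state,
    2) roll out the current policy from that state ("dreaming"),
    3) keep the new experience only if its return beats the old one.
    """
    # Step 1: sample a past experience: (state, trajectory, return)
    state, old_traj, old_return = random.choice(buffer)
    env.reset_to(state)  # assumes the env can be reset to a past state

    # Step 2: "dream" forward by following the agent's current policy
    new_traj, new_return, s = [], 0.0, state
    for _ in range(horizon):
        a = policy(s)
        s, r, done = env.step(a)
        new_traj.append((a, r))
        new_return += r
        if done:
            break

    # Step 3: refresh the buffer only if the dream turned out better
    if new_return > old_return:
        buffer.append((state, new_traj, new_return))
    return new_return > old_return
```

The key design choice this sketch highlights is that refreshed experiences are filtered by return, so the buffer is only ever updated with trajectories that improve on the agent's past behavior.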