Bootstrapping is a core mechanism in Reinforcement Learning (RL). Most algorithms, based on temporal differences, replace the true value of the next state with their current estimate of that value. Yet another estimate could be leveraged to bootstrap RL: the current policy. Our core contribution rests on a very simple idea: adding the scaled log-policy to the immediate reward. We show that slightly modifying Deep Q-Network (DQN) in this way yields an agent that is competitive with distributional methods on Atari games, without making use of distributional RL, n-step returns or prioritized replay. To demonstrate the versatility of this idea, we also use it together with an Implicit Quantile Network (IQN). The resulting agent outperforms Rainbow on Atari, establishing a new state of the art with very few modifications to the original algorithm. Complementing this empirical study, we provide strong theoretical insights into what happens under the hood: implicit Kullback-Leibler regularization and an increase of the action gap.
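As a minimal sketch of the reward modification described above (our own notation and assumptions, not the paper's exact formulation): write $q_{\bar\theta}$ for the target network, $\pi_{\bar\theta} \propto \exp(q_{\bar\theta}/\tau)$ for the softmax policy it induces at a temperature $\tau$, and $\alpha$ for the scaling factor. Adding the scaled log-policy to the immediate reward then turns the regression target of an entropy-regularized (soft) DQN into
\[
  r_t + \alpha \tau \ln \pi_{\bar\theta}(a_t \mid s_t)
  + \gamma \sum_{a'} \pi_{\bar\theta}(a' \mid s_{t+1})
    \bigl( q_{\bar\theta}(s_{t+1}, a') - \tau \ln \pi_{\bar\theta}(a' \mid s_{t+1}) \bigr),
\]
where the first logarithmic term is the proposed addition to the reward and the remainder is the usual bootstrapped estimate; the entropy-regularized form of the bootstrap is an assumption of this sketch, not something stated in the abstract.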