The Q-learning algorithm is known to be affected by the maximization bias, i.e., the systematic overestimation of action values, an important issue that has recently received renewed attention. Double Q-learning has been proposed as an efficient algorithm to mitigate this bias. However, this comes at the price of an underestimation of action values, in addition to increased memory requirements and slower convergence. In this paper, we introduce a new way to address the maximization bias in the form of a "self-correcting algorithm" for approximating the maximum of an expected value. Our method balances the overestimation of the single estimator used in conventional Q-learning and the underestimation of the double estimator used in Double Q-learning. Applying this strategy to Q-learning results in Self-correcting Q-learning. We show theoretically that this new algorithm enjoys the same convergence guarantees as Q-learning while being more accurate. Empirically, it performs better than Double Q-learning in domains with rewards of high variance, and it even attains faster convergence than Q-learning in domains with rewards of zero or low variance. These advantages transfer to a Deep Q-Network implementation that we call Self-correcting DQN, which outperforms regular DQN and Double DQN on several tasks in the Atari 2600 domain.
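To make the bias the abstract refers to concrete, the following is a minimal, self-contained sketch (not the paper's algorithm) of the two estimators it contrasts: the single estimator, which takes the maximum of per-action sample means and overestimates max_a E[X_a], and the double estimator, which selects the best action on one half of the data and evaluates it on the other half, and tends to underestimate. The action means, sample sizes, and variable names below are illustrative assumptions.

```python
import numpy as np

# True action values: the true maximum of expected values is 0.2.
rng = np.random.default_rng(0)
n_actions, n_samples, n_trials = 10, 100, 10_000
true_means = np.linspace(0.0, 0.2, n_actions)

single_estimates, double_estimates = [], []
for _ in range(n_trials):
    # High-variance rewards for each action.
    samples = rng.normal(loc=true_means[:, None], scale=1.0,
                         size=(n_actions, n_samples))

    # Single estimator (implicit in the Q-learning target):
    # max over per-action sample means -- biased upward.
    single_estimates.append(samples.mean(axis=1).max())

    # Double estimator (used by Double Q-learning): pick the argmax on one
    # independent half, evaluate it on the other half. When a suboptimal
    # action is selected, its (lower) mean is estimated without bias, so the
    # overall estimate is biased downward relative to max_a E[X_a].
    half_a = samples[:, : n_samples // 2]
    half_b = samples[:, n_samples // 2 :]
    best = half_a.mean(axis=1).argmax()
    double_estimates.append(half_b[best].mean())

print(f"true max of expected values:          {true_means.max():+.3f}")
print(f"single estimator (Q-learning-style):  {np.mean(single_estimates):+.3f}")
print(f"double estimator (Double Q-learning): {np.mean(double_estimates):+.3f}")
```

Running this shows the single estimator landing above 0.2 and the double estimator below it; the self-correcting estimator proposed in the paper is designed to balance these two opposing biases.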