Popular off-policy deep reinforcement learning algorithms compensate for overestimation bias during temporal-difference learning by utilizing pessimistic estimates of the expected target returns. In this work, we propose a novel learnable penalty to enact such pessimism, based on a new way to quantify the critic's epistemic uncertainty. Furthermore, we propose to learn the penalty alongside the critic with dual TD-learning, a strategy to estimate and minimize the bias magnitude in the target returns. Our method enables us to accurately counteract overestimation bias throughout training without incurring the downsides of overly pessimistic targets. Empirically, by integrating our method and other orthogonal improvements with popular off-policy algorithms, we achieve state-of-the-art results in continuous control tasks from both proprioceptive and pixel observations.
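To make the described mechanism concrete, below is a minimal sketch (not the paper's actual formulation) of how a learnable pessimism penalty and a dual-style update could look. It assumes twin target critics whose disagreement serves as a proxy for epistemic uncertainty, a learnable coefficient `log_beta` that scales the penalty, and a placeholder estimate of the target bias; all names and the bias estimator are illustrative assumptions.

```python
import torch

# Placeholder batch of target-critic estimates and transition data.
q1_target = torch.randn(256)   # first target critic's Q estimates (hypothetical)
q2_target = torch.randn(256)   # second target critic's Q estimates (hypothetical)
reward = torch.randn(256)
done = torch.zeros(256)
gamma = 0.99

# Learnable penalty coefficient, optimized alongside the critic.
log_beta = torch.zeros(1, requires_grad=True)
beta_opt = torch.optim.Adam([log_beta], lr=3e-4)

# Pessimistic target: mean critic estimate minus a learned penalty scaled by
# the critics' disagreement (one possible epistemic-uncertainty proxy).
q_mean = 0.5 * (q1_target + q2_target)
uncertainty = (q1_target - q2_target).abs()
pessimistic_q = q_mean - log_beta.exp().detach() * uncertainty
td_target = reward + gamma * (1.0 - done) * pessimistic_q

# Dual-style update for the penalty: if an independent estimate of the target
# bias (placeholder here) is positive, the penalty grows; if negative, it shrinks.
estimated_bias = torch.randn(256).mean()            # placeholder bias estimate
beta_loss = -log_beta * estimated_bias.detach()
beta_opt.zero_grad()
beta_loss.backward()
beta_opt.step()
```

The sign convention mirrors SAC-style automatic coefficient tuning: minimizing `-log_beta * estimated_bias` increases the penalty when overestimation is detected and relaxes it otherwise, which is one way to keep the targets from becoming overly pessimistic.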