Double Q-learning is a classical method for reducing the overestimation bias caused by taking the maximum of estimated values in the Bellman update. Its variants in the deep Q-learning paradigm have shown great promise in producing reliable value predictions and improving learning performance. However, as shown by prior work, double Q-learning is not fully unbiased and instead suffers from underestimation bias. In this paper, we show that such underestimation bias may lead to multiple non-optimal fixed points under an approximate Bellman operator. To address the concern of convergence to non-optimal stationary solutions, we propose a simple but effective approach as a partial fix for the underestimation bias in double Q-learning. This approach leverages approximate dynamic programming to bound the target value. We extensively evaluate our proposed method on the Atari benchmark tasks and demonstrate its significant improvement over baseline algorithms.
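The two biases discussed above are a standard result and can be illustrated with a small numerical sketch (this is not the paper's method, only background intuition): the single estimator takes the max over one noisy Q-table and overestimates the true maximum, while the double estimator selects the action with one independent estimate and evaluates it with another, which underestimates it. The Q-values and noise scale below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true action values for a single state; the true maximum is 1.0.
true_q = np.array([1.0, 0.9, 0.8, 0.7, 0.6])
noise_std = 1.0
n_trials = 10_000

single_est = np.empty(n_trials)
double_est = np.empty(n_trials)
for i in range(n_trials):
    # Two independent noisy estimates of the same Q-values.
    qa = true_q + rng.normal(0.0, noise_std, size=true_q.size)
    qb = true_q + rng.normal(0.0, noise_std, size=true_q.size)
    # Single estimator: max over one noisy table -> overestimation bias.
    single_est[i] = qa.max()
    # Double estimator: select with qa, evaluate with qb -> underestimation bias.
    double_est[i] = qb[qa.argmax()]

print(f"true max:         {true_q.max():.3f}")
print(f"single estimator: {single_est.mean():.3f}")  # above the true max
print(f"double estimator: {double_est.mean():.3f}")  # below the true max
```

Because the selection noise in `qa` is independent of the evaluation noise in `qb`, the double estimator's expectation equals the true value of the (often suboptimal) selected action, which is why it sits below the true maximum rather than above it.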