How to obtain good value estimates is one of the key problems in reinforcement learning (RL). Current value estimation methods, such as DDPG and TD3, suffer from unnecessary overestimation or underestimation bias. In this paper, we explore the long-neglected potential of double actors for better value function estimation in continuous control settings. First, we uncover and demonstrate the bias-alleviation property of double actors by building double actors upon a single critic and upon double critics, handling the overestimation bias of DDPG and the underestimation bias of TD3, respectively. Next, we find, interestingly, that double actors also improve the exploration ability of the agent. Finally, to mitigate the uncertainty of the value estimates produced by double critics, we further propose to regularize the critic networks under the double-actor architecture, which gives rise to the Double Actors Regularized Critics (DARC) algorithm. Extensive experiments on challenging continuous control tasks show that DARC significantly outperforms state-of-the-art methods while achieving higher sample efficiency.
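To make the double-actor idea concrete, the following is a minimal, hypothetical sketch (not the paper's exact update rule) of how a target value might combine two actors with two critics, together with a simple penalty that keeps the critics' estimates close. The toy functions `q1`, `q2`, `actor1`, `actor2`, the blend weight `nu`, and the regularization `weight` are all illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

# Illustrative stand-ins for learned networks: two critics and two actors,
# modeled here as simple functions of (state, action) and state.
def q1(s, a): return float(np.dot(s, a))          # toy critic 1
def q2(s, a): return 0.9 * float(np.dot(s, a))    # toy critic 2

def actor1(s): return np.tanh(s)                  # toy actor 1
def actor2(s): return np.tanh(0.5 * s)            # toy actor 2

def double_actor_target(r, s_next, gamma=0.99, nu=0.9):
    """Hedged sketch of a double-actor target value.

    For each actor's proposed action, blend the pessimistic (min) and
    optimistic (max) critic estimates; the weight `nu` trades off over-
    vs. underestimation. Keeping the better of the two actors' values
    illustrates how a second actor can counteract underestimation.
    """
    candidates = []
    for actor in (actor1, actor2):
        a = actor(s_next)
        q_lo = min(q1(s_next, a), q2(s_next, a))
        q_hi = max(q1(s_next, a), q2(s_next, a))
        candidates.append(nu * q_lo + (1.0 - nu) * q_hi)
    return r + gamma * max(candidates)

def critic_regularizer(s, a, weight=0.005):
    """Penalty that discourages the two critics from drifting apart,
    reducing the uncertainty of value estimates under double critics."""
    return weight * (q1(s, a) - q2(s, a)) ** 2

if __name__ == "__main__":
    s_next = np.array([0.2, -0.1])
    a = np.array([0.5, 0.3])
    print("target value:", double_actor_target(r=1.0, s_next=s_next))
    print("critic regularizer:", critic_regularizer(s_next, a))
```

In an actual training loop these toy functions would be neural networks, the target would be computed from target (delayed) copies of the actors and critics, and the regularizer would be added to the critic loss; the sketch only illustrates the structure of combining double actors with regularized critics described in the abstract.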