Improving the sample efficiency of reinforcement learning algorithms requires effective exploration. Following the principle of $\textit{optimism in the face of uncertainty}$ (OFU), we train a separate exploration policy to maximize an approximate upper confidence bound of the critics in an off-policy actor-critic framework. However, this introduces an additional mismatch between the stationary state-action distribution of the replay buffer and that of the target policy. To mitigate this off-policy-ness, we adapt the recently introduced DICE framework to learn a distribution correction ratio for off-policy RL training. In particular, we correct the training distribution for both the policies and the critics. Empirically, we evaluate the proposed method on several challenging continuous control tasks and show superior performance compared to state-of-the-art methods. We also conduct extensive ablation studies to demonstrate the effectiveness and the rationale of the proposed method.
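As a rough sketch of the two ingredients summarized above (the precise objectives appear in the method section; the critic-ensemble form of the bound, the bonus coefficient $\beta$, and the ratio notation $w$ are illustrative assumptions rather than the exact formulation), the exploration policy maximizes an optimistic value estimate, while the DICE ratio reweights off-policy samples:
\[
\pi_E \;\approx\; \arg\max_{\pi}\; \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi(\cdot \mid s)}\!\big[\, \mu_Q(s,a) + \beta\, \sigma_Q(s,a) \,\big],
\qquad
w(s,a) \;\approx\; \frac{d^{\pi}(s,a)}{d^{\mathcal{D}}(s,a)},
\]
where $\mu_Q$ and $\sigma_Q$ denote the mean and standard deviation of the critics, $d^{\pi}$ and $d^{\mathcal{D}}$ denote the stationary state-action distributions of the target policy and the replay buffer, and the DICE-estimated ratio $w(s,a)$ reweights both the critic and the policy losses, e.g.\ $\mathbb{E}_{(s,a)\sim\mathcal{D}}\big[\, w(s,a)\,\ell(s,a) \,\big]$.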