Policy gradient algorithms have proven successful in diverse decision-making and control tasks. However, these methods suffer from high sample complexity and instability. In this paper, we address these challenges by proposing a different approach to training the critic in the actor-critic framework. Our work builds on recent studies indicating that traditional actor-critic algorithms do not succeed in fitting the true value function, motivating the search for a better objective for the critic. In our method, the critic uses a new state-value (resp. state-action-value) function approximation that learns the value of states (resp. state-action pairs) relative to their mean value, rather than the absolute value as in conventional actor-critic methods. We prove the theoretical consistency of the new gradient estimator and observe dramatic empirical improvements across a variety of continuous control tasks and algorithms. Furthermore, we validate our method on tasks with sparse rewards, where we provide both experimental evidence and theoretical insights.
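As an illustrative sketch only (not necessarily the paper's exact definition), one way to read "value relative to the mean" is as a centered value function, where the critic fits the deviation of each state's value from the average value under the policy's state distribution $d^\pi$:
\[
\tilde{V}^\pi(s) \;=\; V^\pi(s) \;-\; \mathbb{E}_{s' \sim d^\pi}\!\left[V^\pi(s')\right],
\]
so that regression targets are centered by their mean rather than taken as absolute returns, while the resulting advantage estimates, and hence the policy gradient, remain unchanged up to this state-independent shift.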