We revisit the estimation bias in policy gradients for the discounted episodic Markov decision process (MDP) from a Deep Reinforcement Learning (DRL) perspective. The objective is theoretically formulated as the expected return discounted over the time horizon. One of the major policy gradient biases is the state distribution shift: the state distribution used to estimate the gradient differs from the theoretical formulation in that it does not account for the discount factor. Existing discussion of the influence of this bias in the literature has been limited to the tabular and softmax cases. In this paper, we therefore extend it to the DRL setting, where the policy is parameterized, and theoretically demonstrate how this bias can lead to suboptimal policies. We then discuss why implementations that use the shifted state distribution, though theoretically inaccurate, can still be effective in practice. We show that, despite the state distribution shift, the policy gradient estimation bias can be reduced in the following three ways: 1) a small learning rate; 2) an adaptive-learning-rate-based optimizer; and 3) KL regularization. Specifically, we show that a smaller learning rate, or an adaptive learning rate such as those used by the Adam and RMSProp optimizers, makes policy optimization robust to the bias. We further draw connections between optimizers and optimization regularization to show that both KL and reverse-KL regularization can significantly rectify this bias. Moreover, we provide extensive experiments on continuous control tasks to support our analysis. Our paper sheds light on how successful PG algorithms optimize policies in the DRL setting and contributes insights into practical issues in DRL.
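For concreteness, the state distribution shift referred to above can be sketched in standard policy gradient notation (which may differ from the symbols used later in the paper). The discounted objective is
$$J(\theta) \;=\; \mathbb{E}_{\tau \sim \pi_\theta}\!\Big[\sum_{t=0}^{\infty} \gamma^t\, r(s_t, a_t)\Big],$$
and its exact gradient weights states by the discounted visitation distribution
$$d_\gamma^{\pi_\theta}(s) \;\propto\; \sum_{t=0}^{\infty} \gamma^t \Pr(s_t = s \mid \pi_\theta),
\qquad
\nabla_\theta J(\theta) \;\propto\; \mathbb{E}_{s \sim d_\gamma^{\pi_\theta},\, a \sim \pi_\theta}\!\big[Q^{\pi_\theta}(s,a)\, \nabla_\theta \log \pi_\theta(a \mid s)\big],$$
whereas practical implementations estimate the gradient with states drawn from the undiscounted visitation distribution
$$d^{\pi_\theta}(s) \;\propto\; \sum_{t=0}^{\infty} \Pr(s_t = s \mid \pi_\theta),$$
i.e., the empirical frequency of states along sampled trajectories, which omits the $\gamma^t$ weighting and thus induces the bias studied in this paper.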