Policy gradient methods estimate the gradient of a policy objective using either the likelihood ratio (LR) estimator or the reparameterization (RP) estimator. Many policy gradient methods based on the LR estimator can be unified under the policy gradient theorem (Sutton et al., 2000); however, no analogous unifying theorem exists for policy gradient methods based on the RP estimator. Moreover, no existing method requires and uses both estimators beyond a trivial interpolation between them. In this paper, we provide a theoretical framework that unifies several existing policy gradient methods based on the RP estimator. Building on this framework, we introduce a novel strategy for computing the policy gradient that, for the first time, incorporates both the LR and RP estimators and is unbiased only when both estimators are present. Based on this strategy, we develop a new on-policy algorithm, the Reward Policy Gradient algorithm, which is the first model-free policy gradient method to utilize reward gradients. Using an idealized environment, we show that a policy gradient based solely on the RP estimator for rewards is biased even with true rewards, whereas our combined estimator is not. Finally, we show that our method performs comparably with or outperforms Proximal Policy Optimization, an LR-based on-policy method, on several continuous control tasks.
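For reference, the two estimators contrasted above are commonly written in the following standard forms for a generic objective $\mathbb{E}_{x \sim p_\theta}[f(x)]$; this is a textbook-style sketch (assuming a differentiable reparameterization $x = g_\theta(\epsilon)$ with $\epsilon \sim p(\epsilon)$), not the specific combined estimator derived in the paper.

\[
\nabla_\theta\, \mathbb{E}_{x \sim p_\theta}\!\left[ f(x) \right]
  = \mathbb{E}_{x \sim p_\theta}\!\left[ f(x)\, \nabla_\theta \log p_\theta(x) \right]
  \qquad \text{(LR / score-function estimator)}
\]
\[
\nabla_\theta\, \mathbb{E}_{\epsilon \sim p(\epsilon)}\!\left[ f\!\big(g_\theta(\epsilon)\big) \right]
  = \mathbb{E}_{\epsilon \sim p(\epsilon)}\!\left[ \nabla_\theta f\!\big(g_\theta(\epsilon)\big) \right]
  \qquad \text{(RP / pathwise estimator)}
\]

The LR form requires only the ability to sample from and differentiate the log-density of $p_\theta$, while the RP form additionally requires $f$ to be differentiable along the sampled path, which is why combining the two places distinct requirements on the reward signal.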