Policy evaluation algorithms are essential to reinforcement learning because they predict the performance of a policy. However, this prediction problem involves two long-standing issues that need to be addressed: off-policy stability and on-policy efficiency. The conventional temporal difference (TD) algorithm is known to perform very well in the on-policy setting, yet it is not off-policy stable. On the other hand, the gradient TD and emphatic TD algorithms are off-policy stable, but they are not on-policy efficient. This paper introduces novel algorithms that are both off-policy stable and on-policy efficient by using the oblique projection method. Empirical results on various domains validate the effectiveness of the proposed approach.
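For reference, a minimal sketch of the conventional linear TD(0) update for policy evaluation, written in standard notation (the symbols below are generic and are not drawn from this paper):
\[
\delta_t = R_{t+1} + \gamma\,\theta_t^\top \phi(S_{t+1}) - \theta_t^\top \phi(S_t),
\qquad
\theta_{t+1} = \theta_t + \alpha\,\rho_t\,\delta_t\,\phi(S_t),
\]
where $\rho_t = \pi(A_t \mid S_t)/\mu(A_t \mid S_t)$ is the importance-sampling ratio between the target policy $\pi$ and the behavior policy $\mu$ (with $\rho_t = 1$ in the on-policy case). This update is efficient on-policy but can diverge under off-policy sampling, which is the instability that gradient TD and emphatic TD methods remedy, at the cost of on-policy efficiency.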