Most policy evaluation algorithms are based on the theories of the Bellman Expectation and Optimality Equations, which give rise to two popular approaches - Policy Iteration (PI) and Value Iteration (VI). However, multi-step bootstrapping is often at cross-purposes with off-policy learning in PI-based methods due to the large variance of multi-step off-policy correction. In contrast, VI-based methods are naturally off-policy but limited to one-step learning. In this paper, we deduce a novel multi-step Bellman Optimality Equation by exploiting a latent structure of multi-step bootstrapping with the optimal value function. Via this new equation, we derive a new multi-step value iteration method that converges to the optimal value function with an exponential contraction rate of $\mathcal{O}(\gamma^n)$ but only linear computational complexity. Moreover, it naturally yields a suite of multi-step off-policy algorithms that can safely utilize data collected by arbitrary policies without correction. Experiments reveal that the proposed methods are reliable, easy to implement, and achieve state-of-the-art performance on a series of standard benchmark datasets.
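For orientation, the display below recalls the classical one-step Bellman optimality equation and a plausible $n$-step analogue in standard MDP notation; this is an illustrative sketch only, and the exact multi-step equation derived in the paper may exploit additional structure.

\[
V^*(s) \;=\; \max_{a}\, \mathbb{E}\big[\, r(s,a) + \gamma\, V^*(s') \,\big],
\qquad
V^*(s) \;=\; \max_{a_0,\dots,a_{n-1}}\, \mathbb{E}\Big[\, \textstyle\sum_{t=0}^{n-1} \gamma^{t} r_t \;+\; \gamma^{n} V^*(s_n) \,\Big].
\]

Because the $n$-step form composes $n$ one-step optimal backups, the induced value-iteration operator is a sup-norm contraction with modulus $\gamma^{n}$ per sweep, which is consistent with the $\mathcal{O}(\gamma^n)$ contraction rate stated above.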