We study the problem of policy evaluation with linear function approximation and present efficient and practical algorithms that come with strong optimality guarantees. We begin by proving lower bounds that establish baselines on both the deterministic and stochastic errors in this problem. In particular, we prove an oracle complexity lower bound on the deterministic error in an instance-dependent norm associated with the stationary distribution of the transition kernel, and use the local asymptotic minimax machinery to prove an instance-dependent lower bound on the stochastic error in the i.i.d. observation model. Existing algorithms fail to match at least one of these lower bounds: to illustrate, we analyze a variance-reduced variant of temporal difference learning, showing in particular that it fails to achieve the oracle complexity lower bound. To remedy this issue, we develop an accelerated, variance-reduced fast temporal difference algorithm (VRFTD) that simultaneously matches both lower bounds and attains a strong notion of instance-optimality. Finally, we extend the VRFTD algorithm to the setting with Markovian observations, and provide instance-dependent convergence results. Our theoretical guarantees of optimality are corroborated by numerical experiments.
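As background for the algorithms discussed above, the following is a minimal sketch of plain TD(0) with linear value-function approximation, the baseline that variance-reduced and accelerated variants build on. The function name, step size, and data format are illustrative assumptions; this is not the paper's VRFTD procedure.

```python
import numpy as np

def td0_linear(transitions, phi, dim, gamma=0.99, alpha=0.05):
    """Vanilla TD(0) with linear value-function approximation.

    transitions: list of (s, r, s_next) samples generated under the target policy.
    phi:         feature map returning a length-`dim` numpy array for a state.
    Returns theta such that V(s) is approximated by phi(s) @ theta.
    """
    theta = np.zeros(dim)
    for s, r, s_next in transitions:
        # TD error: r + gamma * V(s') - V(s)
        delta = r + gamma * phi(s_next) @ theta - phi(s) @ theta
        # Semi-gradient step toward the projected Bellman fixed point
        theta += alpha * delta * phi(s)
    return theta
```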