We study the problem of policy evaluation with linear function approximation and present efficient and practical algorithms that come with strong optimality guarantees. We begin by proving lower bounds that establish baselines on both the deterministic error and stochastic error in this problem. In particular, we prove an oracle complexity lower bound on the deterministic error in an instance-dependent norm associated with the stationary distribution of the transition kernel, and use the local asymptotic minimax machinery to prove an instance-dependent lower bound on the stochastic error in the i.i.d. observation model. Existing algorithms fail to match at least one of these lower bounds: To illustrate, we analyze a variance-reduced variant of temporal difference learning, showing in particular that it fails to achieve the oracle complexity lower bound. To remedy this issue, we develop an accelerated, variance-reduced fast temporal difference algorithm (VRFTD) that simultaneously matches both lower bounds and attains a strong notion of instance-optimality. Finally, we extend the VRFTD algorithm to the setting with Markovian observations, and provide instance-dependent convergence results that match those in the i.i.d. setting up to a multiplicative factor that is proportional to the mixing time of the chain. Our theoretical guarantees of optimality are corroborated by numerical experiments.
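To ground the setting, the following is a minimal illustrative sketch (not the paper's VRFTD algorithm) of plain TD(0) with linear function approximation under an i.i.d. observation model; the function names, dimensions, and step sizes are hypothetical placeholders chosen only to show the basic update that the variance-reduced and accelerated methods build on.

```python
import numpy as np

def td0_linear(sample_transition, phi, dim, gamma=0.9, step_size=0.01, n_iters=10_000):
    """Estimate theta so that V(s) is approximated by phi(s) @ theta.

    sample_transition: callable returning (s, r, s_next), drawn i.i.d. with s
                       from the stationary distribution of the evaluated policy.
    phi:               feature map, phi(s) -> np.ndarray of shape (dim,).
    """
    theta = np.zeros(dim)
    for _ in range(n_iters):
        s, r, s_next = sample_transition()
        # TD error: reward plus discounted bootstrap value minus current estimate.
        td_error = r + gamma * phi(s_next) @ theta - phi(s) @ theta
        # Semi-gradient update of the linear parameters.
        theta += step_size * td_error * phi(s)
    return theta
```

In this sketch each iteration uses a single fresh sample; variance-reduced variants instead reuse a batch of samples to build a recentered update, and the accelerated VRFTD scheme studied in the paper further modifies the iteration to match the oracle complexity lower bound.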