In this work, we consider policy-based methods for solving the reinforcement learning problem and establish sample complexity guarantees. A policy-based algorithm typically consists of an actor and a critic. For the actor, we consider various policy update rules, including the celebrated natural policy gradient. In contrast to the gradient ascent approach taken in the literature, we view natural policy gradient as an approximate way of implementing policy iteration, and show that natural policy gradient (without any regularization) enjoys geometric convergence when using increasing stepsizes. For the critic, we consider TD-learning with linear function approximation and off-policy sampling. Since TD-learning is well known to be potentially unstable in this setting, we propose a stable generic algorithm (including two specific instances: the $\lambda$-averaged $Q$-trace and the two-sided $Q$-trace) that uses multi-step returns and generalized importance sampling factors, and provide a finite-sample analysis. Combining the geometric convergence of the actor with the finite-sample analysis of the critic, we establish, for the first time, an overall $\mathcal{O}(\epsilon^{-2})$ sample complexity for finding an optimal policy (up to a function approximation error) using policy-based methods under off-policy sampling and linear function approximation.
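To make the policy-iteration viewpoint concrete, the following is a minimal sketch assuming a tabular softmax policy class; the stepsize symbol $\beta_k$ and the critic estimate $\widehat{Q}^{\pi_k}$ are illustrative names rather than notation fixed by the abstract:
$$
\pi_{k+1}(a \mid s) \;=\; \frac{\pi_k(a \mid s)\,\exp\!\big(\beta_k \widehat{Q}^{\pi_k}(s,a)\big)}{\sum_{a'} \pi_k(a' \mid s)\,\exp\!\big(\beta_k \widehat{Q}^{\pi_k}(s,a')\big)}.
$$
As the stepsize $\beta_k$ increases, this update concentrates the policy mass on $\arg\max_a \widehat{Q}^{\pi_k}(s,a)$, so each actor step approximates the policy improvement step of policy iteration; this is the mechanism behind viewing natural policy gradient with increasing stepsizes as approximate policy iteration.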