This paper revisits the temporal difference (TD) learning algorithm for policy evaluation tasks in reinforcement learning. Typically, the performance of TD(0) and TD($\lambda$) is very sensitive to the choice of stepsizes, and TD(0) often suffers from slow convergence. Motivated by the tight link between the TD(0) learning algorithm and stochastic gradient methods, we develop a provably convergent adaptive projected variant of the TD(0) learning algorithm with linear function approximation, which we term AdaTD(0). In contrast to TD(0), AdaTD(0) is robust, or less sensitive, to the choice of stepsizes. Analytically, we establish that to reach an $\epsilon$ accuracy, the number of iterations needed is $\tilde{O}(\epsilon^{-2}\ln^4\frac{1}{\epsilon}/\ln^4\frac{1}{\rho})$ in the general case, where $\rho$ characterizes the speed at which the underlying Markov chain converges to its stationary distribution. This implies that the iteration complexity of AdaTD(0) is no worse than that of TD(0) in the worst case. When the stochastic semi-gradients are sparse, we provide a theoretical acceleration guarantee for AdaTD(0). Going beyond TD(0), we also develop an adaptive variant of TD($\lambda$), referred to as AdaTD($\lambda$). Empirically, we evaluate the performance of AdaTD(0) and AdaTD($\lambda$) on several standard reinforcement learning tasks, and the results demonstrate the effectiveness of our new approaches.
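To make the idea concrete, the following is a minimal sketch of an adaptive projected TD(0) update with linear function approximation, in the spirit described above. It is an illustration only: the abstract does not specify the exact update rule, so this sketch assumes an AdaGrad-style per-coordinate stepsize applied to the stochastic semi-gradient, followed by projection onto an $\ell_2$ ball; the function name `adatd0_sketch`, the feature map `phi`, and all hyperparameter values are hypothetical.

```python
import numpy as np

def adatd0_sketch(samples, phi, gamma=0.99, alpha=0.1, eps=1e-8, radius=10.0):
    """Hypothetical AdaGrad-style adaptive TD(0) with linear function
    approximation and projection; `samples` is a list of (s, r, s_next)
    transitions and `phi` maps a state to a feature vector."""
    d = phi(samples[0][0]).shape[0]
    theta = np.zeros(d)           # linear value-function parameters
    accum = np.zeros(d)           # accumulated squared semi-gradients
    for s, r, s_next in samples:
        f, f_next = phi(s), phi(s_next)
        delta = r + gamma * f_next @ theta - f @ theta   # TD error
        g = -delta * f                                   # stochastic semi-gradient
        accum += g * g
        theta = theta - alpha * g / (np.sqrt(accum) + eps)  # adaptive stepsize
        norm = np.linalg.norm(theta)
        if norm > radius:                                # projection keeps iterates bounded
            theta *= radius / norm
    return theta
```

The per-coordinate scaling by accumulated squared semi-gradients is what makes the update less sensitive to the initial stepsize `alpha`, and it is also the mechanism through which sparsity of the semi-gradients could yield acceleration, since rarely-updated coordinates retain larger effective stepsizes.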