This paper develops a unified framework to study finite-sample convergence guarantees of a large class of value-based asynchronous Reinforcement Learning (RL) algorithms. We do this by first reformulating the RL algorithms as Markovian Stochastic Approximation (SA) algorithms to solve fixed-point equations. We then develop a Lyapunov analysis and derive mean-square error bounds on the convergence of the Markovian SA. Based on this central result, we establish finite-sample mean-square convergence bounds for asynchronous RL algorithms such as $Q$-learning, $n$-step TD, TD$(\lambda)$, and off-policy TD algorithms including V-trace. As a by-product, by analyzing the performance bounds of the TD$(\lambda)$ (and $n$-step TD) algorithm for general $\lambda$ (and $n$), we demonstrate a bias-variance trade-off, i.e., the efficiency of bootstrapping in RL. This was first posed as an open problem in [37].