This paper develops a unified framework for studying finite-sample convergence guarantees of a large class of value-based asynchronous reinforcement learning (RL) algorithms. We first reformulate these RL algorithms as \textit{Markovian Stochastic Approximation} (SA) algorithms for solving fixed-point equations. We then develop a Lyapunov analysis and derive mean-square error bounds on the convergence of Markovian SA. Based on this result, we establish finite-sample mean-square convergence bounds for asynchronous RL algorithms such as $Q$-learning, $n$-step TD, TD$(\lambda)$, and off-policy TD algorithms including V-trace. As a by-product, by analyzing the convergence bounds of $n$-step TD and TD$(\lambda)$, we provide theoretical insight into the bias-variance trade-off, i.e., the efficiency of bootstrapping in RL, which was first posed as an open problem in (Sutton, 1999).
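To make the SA viewpoint concrete, the following is a minimal hypothetical sketch (not from the paper) of asynchronous $Q$-learning on a toy two-state deterministic MDP: a single $(s,a)$ entry is updated per step along one trajectory generated by a random behavior policy (the Markovian noise), and the iterates track the Bellman fixed point $Q^*(s,a) = r(s,a) + \gamma \max_{a'} Q^*(s',a')$. The MDP, step sizes, and step counts are all illustrative choices.

```python
import random

# Toy two-state deterministic MDP (illustrative assumption, not from the paper):
# action 0 keeps the current state, action 1 toggles it; reward 1 whenever
# the next state is state 1. With gamma = 0.5 the exact fixed point is
# Q*(0, .) = [1, 2] and Q*(1, .) = [2, 1].
GAMMA = 0.5
NUM_STATES, NUM_ACTIONS = 2, 2

def step(s, a):
    """Deterministic transition: action 1 toggles the state."""
    s_next = s if a == 0 else 1 - s
    reward = 1.0 if s_next == 1 else 0.0
    return s_next, reward

def async_q_learning(num_steps=20000, alpha=0.5, seed=0):
    """Asynchronous Q-learning as Markovian SA: update one (s, a) entry
    per step along a single trajectory driven by a uniformly random
    behavior policy."""
    rng = random.Random(seed)
    q = [[0.0] * NUM_ACTIONS for _ in range(NUM_STATES)]
    s = 0
    for _ in range(num_steps):
        a = rng.randrange(NUM_ACTIONS)
        s_next, r = step(s, a)
        target = r + GAMMA * max(q[s_next])    # sampled Bellman operator
        q[s][a] += alpha * (target - q[s][a])  # asynchronous SA update
        s = s_next
    return q

q = async_q_learning()
q_star = [[1.0, 2.0], [2.0, 1.0]]
err = max(abs(q[s][a] - q_star[s][a])
          for s in range(NUM_STATES) for a in range(NUM_ACTIONS))
print(round(err, 4))
```

Because this toy environment is deterministic, the iterates settle essentially exactly on $Q^*$; the paper's contribution is precisely to bound the mean-square error of such updates in finite samples when the trajectory noise is Markovian rather than i.i.d.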