Motivated by applications in reinforcement learning (RL), we study a nonlinear stochastic approximation (SA) algorithm under Markovian noise, and establish its finite-sample convergence bounds under various stepsizes. Specifically, we show that when using a constant stepsize (i.e., $\alpha_k\equiv \alpha$), the algorithm converges exponentially fast to a neighborhood (with radius $O(\alpha\log(1/\alpha))$) around the desired limit point. When using diminishing stepsizes with an appropriate decay rate, the algorithm converges at rate $O(\log(k)/k)$. Our proof is based on Lyapunov drift arguments, and to handle the Markovian noise, we exploit the fast mixing of the underlying Markov chain. To demonstrate the generality of our theoretical results on Markovian SA, we use them to derive finite-sample bounds for the popular $Q$-learning with linear function approximation algorithm, under a condition on the behavior policy. Importantly, we do not require the samples to be i.i.d., and we do not need an artificial projection step in the algorithm to maintain the boundedness of the iterates. Numerical simulations corroborate our theoretical results.
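To make the setting concrete, below is a minimal sketch of $Q$-learning with linear function approximation viewed as a Markovian SA recursion, under either a constant or a diminishing stepsize. The environment (states, actions, features, rewards, behavior policy) and all numerical values are illustrative assumptions, not the paper's setup; the point is only that samples come from a single trajectory of the Markov chain induced by the behavior policy (so they are not i.i.d.) and that no projection step is used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative MDP and feature map (all values are assumptions for this sketch).
n_states, n_actions, d, gamma = 5, 2, 3, 0.9
phi = rng.normal(size=(n_states, n_actions, d))                    # features phi(s, a)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # transition kernel
R = rng.normal(size=(n_states, n_actions))                         # reward function
behavior = np.full((n_states, n_actions), 1.0 / n_actions)         # fixed behavior policy

def q_learning_lfa(num_steps, alpha):
    """Q-learning with linear function approximation along one trajectory.

    Update: w_{k+1} = w_k + alpha(k) * (r_k + gamma * max_a phi(s_{k+1}, a)^T w_k
                                        - phi(s_k, a_k)^T w_k) * phi(s_k, a_k).
    The noise is Markovian because (s_k, a_k, s_{k+1}) is generated sequentially
    by the behavior policy; no projection is applied to keep w bounded.
    """
    w = np.zeros(d)
    s = 0
    for k in range(num_steps):
        a = rng.choice(n_actions, p=behavior[s])      # action from the behavior policy
        s_next = rng.choice(n_states, p=P[s, a])      # Markovian transition
        td_error = R[s, a] + gamma * np.max(phi[s_next] @ w) - phi[s, a] @ w
        w = w + alpha(k) * td_error * phi[s, a]       # SA update
        s = s_next
    return w

# Constant stepsize alpha_k = alpha: convergence to an O(alpha log(1/alpha)) neighborhood.
w_const = q_learning_lfa(50_000, lambda k: 0.05)
# Diminishing stepsize with suitable decay: O(log(k)/k) convergence rate.
w_dimin = q_learning_lfa(50_000, lambda k: 1.0 / (k + 1))
print(w_const, w_dimin)
```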