Greedy-GQ with linear function approximation, originally proposed in \cite{maei2010toward}, is a value-based off-policy algorithm for optimal control in reinforcement learning; it has a non-linear two-timescale structure with a non-convex objective function. This paper develops its finite-time error bounds. We show that the Greedy-GQ algorithm converges as fast as $\mathcal{O}({1}/{\sqrt{T}})$ under the i.i.d.\ setting and $\mathcal{O}({\log T}/{\sqrt{T}})$ under the Markovian setting. We further design a variant of the vanilla Greedy-GQ algorithm using a nested-loop approach, and show that its sample complexity is $\mathcal{O}({\log(1/\epsilon)\epsilon^{-2}})$, which matches that of the vanilla Greedy-GQ algorithm. Our finite-time error bounds match those of stochastic gradient descent algorithms for general smooth non-convex optimization problems. Our finite-sample analysis provides theoretical guidance on choosing step-sizes for faster convergence in practice, and suggests a trade-off between the convergence rate and the quality of the obtained policy. The techniques developed in this paper provide a general approach for the finite-sample analysis of non-convex two-timescale value-based reinforcement learning algorithms.
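For context, the two-timescale structure referred to above can be sketched as follows, following the standard Greedy-GQ formulation of \cite{maei2010toward}; the notation here ($\phi_t=\phi(s_t,a_t)$ for the feature vector, $\theta_t$ for the main iterate, $\omega_t$ for the auxiliary iterate, $\gamma$ for the discount factor, and step-sizes $\alpha_t$ for the slow timescale and $\beta_t$ for the fast timescale) is an assumption introduced only for illustration and is not defined in this abstract:
\begin{align*}
\delta_{t+1} &= r_t + \gamma \max_{a'} \theta_t^\top \phi(s_{t+1}, a') - \theta_t^\top \phi_t, \\
\bar{\phi}_{t+1} &= \phi\big(s_{t+1}, \arg\max_{a'} \theta_t^\top \phi(s_{t+1}, a')\big), \\
\theta_{t+1} &= \theta_t + \alpha_t \big( \delta_{t+1}\, \phi_t - \gamma\, (\omega_t^\top \phi_t)\, \bar{\phi}_{t+1} \big), \\
\omega_{t+1} &= \omega_t + \beta_t \big( \delta_{t+1} - \omega_t^\top \phi_t \big)\, \phi_t.
\end{align*}
The non-convexity of the underlying objective and the coupling between the $\theta$- and $\omega$-updates are what make the finite-time analysis described above non-trivial.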