Natural policy gradient (NPG) methods with function approximation achieve impressive empirical success in reinforcement learning problems with large state-action spaces. However, theoretical understanding of their convergence behavior remains limited in the function approximation setting. In this paper, we perform a finite-time analysis of NPG with linear function approximation and softmax parameterization, and prove for the first time that the widely used entropy regularization method, which encourages exploration, leads to a linear convergence rate. Under considerably weaker regularity conditions, we prove that an entropy-regularized Q-NPG variant with linear function approximation achieves an $\tilde{O}(1/T)$ convergence rate. We adopt a Lyapunov drift analysis to establish the convergence results and to explain the effectiveness of entropy regularization in improving convergence rates.
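For concreteness, the following is a sketch of one standard formulation of this setting; the symbols $\phi$, $\theta$, $\tau$, $\eta$, and $F$ are illustrative and need not match the paper's exact notation. A log-linear softmax policy and an entropy-regularized value function take the form
$$
\pi_\theta(a \mid s) = \frac{\exp\big(\theta^\top \phi(s,a)\big)}{\sum_{a'} \exp\big(\theta^\top \phi(s,a')\big)},
\qquad
V_\tau^{\pi_\theta}(\mu) = \mathbb{E}_{\pi_\theta}\!\left[\sum_{t=0}^{\infty} \gamma^t \Big( r(s_t,a_t) - \tau \log \pi_\theta(a_t \mid s_t) \Big) \,\Big|\, s_0 \sim \mu \right],
$$
and NPG preconditions the policy gradient with the (pseudo-inverse of the) Fisher information matrix,
$$
\theta_{k+1} = \theta_k + \eta \, F(\theta_k)^{\dagger} \nabla_\theta V_\tau^{\pi_{\theta_k}}(\mu),
\qquad
F(\theta) = \mathbb{E}\big[ \nabla_\theta \log \pi_\theta(a \mid s)\, \nabla_\theta \log \pi_\theta(a \mid s)^\top \big].
$$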