Natural policy gradient (NPG) methods with function approximation achieve impressive empirical success in reinforcement learning problems with large state-action spaces. However, theoretical understanding of their convergence behavior remains limited in the function approximation setting. In this paper, we perform a finite-time analysis of NPG with linear function approximation and softmax parameterization, and prove for the first time that the widely used entropy regularization method, which encourages exploration, leads to a linear convergence rate. We adopt a Lyapunov drift analysis to prove the convergence results and to explain the effectiveness of entropy regularization in improving convergence rates.
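To make the setting concrete, the following is a minimal sketch (not the paper's algorithm or analysis) of the entropy-regularized NPG update with softmax parameterization in the tabular case, where the update takes the multiplicative-weights form pi_{t+1}(a|s) proportional to pi_t(a|s)^{1 - eta*tau} * exp(eta * Q_tau(s,a)). The random MDP, step size eta, and regularization weight tau are hypothetical choices for illustration only.

```python
import numpy as np

# Illustrative sketch: entropy-regularized NPG with softmax parameterization
# on a small random MDP (all problem parameters below are hypothetical).
rng = np.random.default_rng(0)
nS, nA, gamma, tau, eta = 5, 3, 0.9, 0.1, 0.5

P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # transition kernel P[s, a, s']
R = rng.uniform(size=(nS, nA))                  # reward table R[s, a]

def soft_q(pi, iters=500):
    """Evaluate the entropy-regularized (soft) Q-function of policy pi."""
    Q = np.zeros((nS, nA))
    for _ in range(iters):
        # soft value: V(s) = sum_a pi(a|s) * (Q(s,a) - tau * log pi(a|s))
        V = (pi * (Q - tau * np.log(pi + 1e-12))).sum(axis=1)
        Q = R + gamma * P @ V
    return Q

pi = np.full((nS, nA), 1.0 / nA)                # uniform initial policy
for t in range(200):
    Q = soft_q(pi)
    # entropy-regularized NPG step in multiplicative-weights form
    logits = (1 - eta * tau) * np.log(pi + 1e-12) + eta * Q
    logits -= logits.max(axis=1, keepdims=True)  # stabilize the softmax
    pi = np.exp(logits)
    pi /= pi.sum(axis=1, keepdims=True)

print("greedy actions of the converged policy:", pi.argmax(axis=1))
```

In this tabular form the regularization weight tau pulls each iterate toward the uniform policy, which is the mechanism the linear-convergence analysis exploits; the paper's contribution concerns the linear function approximation setting rather than this toy tabular sketch.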