We study the global convergence of policy gradient methods for infinite-horizon, entropy-regularized Markov decision processes (MDPs) with continuous state and action spaces. We consider a softmax policy with a one-hidden-layer neural network approximation in a mean-field regime. We add entropic regularization on the associated mean-field probability measure and study the corresponding gradient flow in the 2-Wasserstein metric. We show that the objective function increases along the gradient flow. Further, we prove that if the regularization of the mean-field measure is sufficiently strong, the gradient flow converges exponentially fast to the unique stationary solution, which is the unique maximizer of the regularized MDP objective. Lastly, we study the sensitivity of the value function along the gradient flow with respect to the regularization parameters and the initial condition. Our results rely on a careful analysis of the nonlinear Fokker-Planck-Kolmogorov equation and extend the pioneering works of Mei et al. (2020) and Agarwal et al. (2020), which quantify the global convergence rate of policy gradient for entropy-regularized MDPs in the tabular setting.
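For concreteness, the following is a minimal LaTeX sketch of the kind of regularized objective and Wasserstein gradient-flow dynamics the abstract describes; all notation (J, sigma, mu_t, theta, C, mu*) is our illustrative assumption and is not taken from the paper itself.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Hedged sketch: the symbols J, \sigma, \mu_t, \theta, C, \mu^\ast are
% illustrative assumptions, not notation taken from the paper.
Let $\mu$ denote the mean-field measure over the parameters $\theta$ of the
one-hidden-layer network, $J(\mu)$ the entropy-regularized MDP objective,
and $\sigma > 0$ the strength of the additional entropic regularization on
the mean-field measure:
\[
  J_\sigma(\mu) = J(\mu) + \sigma \,\mathcal{H}(\mu),
  \qquad
  \mathcal{H}(\mu) = -\int \mu(\theta)\,\log\mu(\theta)\,\mathrm{d}\theta .
\]
% Gradient ascent on J_sigma in the 2-Wasserstein metric yields a nonlinear
% Fokker-Planck-Kolmogorov equation for the flow (\mu_t): the entropy term
% contributes the Laplacian, since its first variation is -log(mu) - 1.
\[
  \partial_t \mu_t
  = -\nabla_\theta \cdot
    \Big( \mu_t \,\nabla_\theta \tfrac{\delta J}{\delta \mu}(\mu_t,\cdot) \Big)
  + \sigma \,\Delta_\theta \mu_t .
\]
% For sigma large enough, exponential convergence to the unique maximizer
% mu* of J_sigma would take the form (C > 0 an unspecified rate):
\[
  J_\sigma(\mu^\ast) - J_\sigma(\mu_t)
  \le e^{-C t}\,\big( J_\sigma(\mu^\ast) - J_\sigma(\mu_0) \big).
\]
\end{document}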