We study the global convergence of policy gradient methods for infinite-horizon, entropy-regularized Markov decision processes (MDPs) with continuous state and action spaces. We consider a softmax policy parameterized by a (one-hidden-layer) neural network in the mean-field regime. An additional entropic regularization on the associated mean-field probability measure is added, and the corresponding gradient flow is studied in the 2-Wasserstein metric. We show that the objective function is increasing along the gradient flow. Further, we prove that if the regularization of the mean-field measure is sufficiently strong, the gradient flow converges exponentially fast to the unique stationary solution, which is the unique maximizer of the regularized MDP objective. Lastly, we study the sensitivity of the value function along the gradient flow with respect to the regularization parameters and the initial condition. Our results rely on a careful analysis of the non-linear Fokker--Planck--Kolmogorov equation and extend the pioneering work of Mei et al. (2020) and Agarwal et al. (2020), which quantifies the global convergence rate of policy gradient for entropy-regularized MDPs in the tabular setting.
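To fix ideas, the display below is a schematic sketch of the objects described above, not the paper's exact formulation: the notation (feature map $\varphi$, temperature $\tau$, discount factor $\gamma$, regularization strength $\sigma$, parameter measure $\nu$) is illustrative, the measure regularizer is written as a differential entropy assuming $\nu$ admits a density, and the value is written in a discounted infinite-horizon form. The last line is the 2-Wasserstein gradient (ascent) flow of the regularized objective, a non-linear Fokker--Planck--Kolmogorov equation of the type analyzed in the paper.
\begin{align*}
\pi_\nu(a \mid s) &\propto \exp\!\Big(\tfrac{1}{\tau}\int_{\mathbb{R}^d}\varphi(\theta; s, a)\,\nu(\mathrm{d}\theta)\Big),
&& \text{(mean-field softmax policy)}\\
V^\tau(\pi_\nu) &= \mathbb{E}^{\pi_\nu}\Big[\sum_{n \ge 0}\gamma^n\big(r(s_n, a_n) - \tau \log \pi_\nu(a_n \mid s_n)\big)\Big],
&& \text{(entropy-regularized value)}\\
J^{\tau,\sigma}(\nu) &= V^\tau(\pi_\nu) - \tfrac{\sigma^2}{2}\int_{\mathbb{R}^d}\log\big(\nu(\theta)\big)\,\nu(\mathrm{d}\theta),
&& \text{(regularization of the measure)}\\
\partial_t \nu_t &= -\nabla_\theta\!\cdot\!\Big(\nu_t\,\nabla_\theta \tfrac{\delta V^\tau}{\delta \nu}(\nu_t,\cdot)\Big) + \tfrac{\sigma^2}{2}\,\Delta_\theta \nu_t.
&& \text{(2-Wasserstein gradient flow)}
\end{align*}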