We propose the homotopic policy mirror descent (HPMD) method for solving discounted, infinite-horizon MDPs with finite state and action spaces, and study its convergence properties. We report several findings that appear to be new in the literature on policy gradient methods: (1) HPMD exhibits global linear convergence of the value optimality gap, and local superlinear convergence of both the policy and the optimality gap with order $\gamma^{-2}$. The superlinear convergence takes effect after no more than $\mathcal{O}(\log(1/\Delta^*))$ iterations, where $\Delta^*$ is defined via a gap quantity associated with the optimal state-action value function; (2) HPMD also exhibits last-iterate convergence of the policy, with the limiting policy corresponding exactly to the optimal policy with maximal entropy at every state. No regularization is added to the optimization objective, so this observation arises solely as an algorithmic property of the homotopic policy gradient method; (3) The last-iterate convergence of HPMD holds for a much broader class of decomposable distance-generating functions, including the $p$-th power of the $\ell_p$-norm and the negative Tsallis entropy. As a byproduct of the analysis, we also discover finite-time exact convergence of HPMD with these divergences, and show that HPMD continues converging to the limiting policy even after the current policy becomes optimal; (4) For the stochastic HPMD method, we further demonstrate that, assuming a generative model for policy evaluation, a sample complexity better than $\tilde{\mathcal{O}}(|\mathcal{S}| |\mathcal{A}| / \epsilon^2)$ holds with high probability for small optimality gap $\epsilon$.
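To make the method concrete, here is a minimal sketch, not the paper's exact algorithm: it instantiates a generic policy mirror descent step with the KL divergence (negative-entropy distance-generating function) and a geometrically vanishing perturbation weight $\tau_k$; the diminishing perturbation is the "homotopy" that pulls the iterates toward the maximal-entropy optimal policy. The stepsize and perturbation schedules, the cost-minimization convention, and the exact policy evaluation are all illustrative assumptions.

```python
import numpy as np

def policy_eval(P, c, pi, gamma):
    """Exact Q^pi for a finite MDP (cost-minimization convention).
    P: (S, A, S) transition kernel, c: (S, A) costs, pi: (S, A) policy."""
    S = P.shape[0]
    c_pi = (pi * c).sum(axis=1)                   # expected cost per state
    P_pi = np.einsum('sa,sat->st', pi, P)         # state transitions under pi
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, c_pi)
    return c + gamma * P @ V                      # Q(s, a)

def hpmd_step(pi, Q, eta, tau):
    """Closed-form KL mirror-descent step with entropy perturbation, i.e.
    argmin_p  eta * (<Q(s,.), p> + tau * <p, log p>) + KL(p || pi(s,.))
    over the simplex, solved per state in log space for stability."""
    logits = (np.log(np.clip(pi, 1e-300, None)) - eta * Q) / (1.0 + eta * tau)
    logits -= logits.max(axis=1, keepdims=True)   # avoid overflow in exp
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def hpmd(P, c, gamma, K=50):
    S, A = c.shape
    pi = np.full((S, A), 1.0 / A)                 # uniform initial policy
    for k in range(K):
        Q = policy_eval(P, c, pi, gamma)
        # Illustrative schedules (assumed, not the paper's constants):
        # growing stepsize, and tau decaying fast enough that eta*tau -> 0.
        eta, tau = (1.0 / gamma) ** k, gamma ** (2 * k)
        pi = hpmd_step(pi, Q, eta, tau)
    return pi

# Example: a small random MDP.
rng = np.random.default_rng(0)
S, A = 5, 3
P = rng.dirichlet(np.ones(S), size=(S, A))        # (S, A, S) kernel
c = rng.uniform(size=(S, A))                      # (S, A) costs
pi_limit = hpmd(P, c, gamma=0.9)
```

The schedules above are chosen only to exhibit the qualitative behavior (the policy concentrating on optimal actions while retaining maximal entropy among them), not the rates proved in the paper.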