We propose the homotopic policy mirror descent (HPMD) method for solving discounted, infinite horizon MDPs with finite state and action space, and study its policy convergence. We report three properties that seem to be new in the literature of policy gradient methods: (1) The policy first converges linearly, then superlinearly with order $\gamma^{-2}$ to the set of optimal policies, after $\mathcal{O}(\log(1/\Delta^*))$ number of iterations, where $\Delta^*$ is defined via a gap quantity associated with the optimal state-action value function; (2) HPMD also exhibits last-iterate convergence, with the limiting policy corresponding exactly to the optimal policy with the maximal entropy for every state. No regularization is added to the optimization objective and hence the second observation arises solely as an algorithmic property of the homotopic policy gradient method. (3) For the stochastic HPMD method, we further demonstrate a better than $\mathcal{O}(|\mathcal{S}| |\mathcal{A}| / \epsilon^2)$ sample complexity for small optimality gap $\epsilon$, when assuming a generative model for policy evaluation.
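To make the flavor of the method concrete, below is a minimal tabular sketch of a policy-mirror-descent-style step in KL geometry with a vanishing pull toward the uniform policy, in the spirit of the homotopic construction described above. The random MDP, the stepsize schedule `eta`, the perturbation schedule `tau`, and the helper names are illustrative assumptions; they are not the schedules or guarantees analyzed in the paper.

```python
import numpy as np

def evaluate_policy(P, r, pi, gamma):
    """Exact policy evaluation for a tabular MDP.
    P: (S, A, S) transition tensor, r: (S, A) rewards, pi: (S, A) policy."""
    S, A = r.shape
    P_pi = np.einsum("sap,sa->sp", P, pi)            # state transitions under pi
    r_pi = np.einsum("sa,sa->s", r, pi)              # expected one-step reward under pi
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    Q = r + gamma * np.einsum("sap,p->sa", P, V)
    return V, Q

def hpmd_sketch(P, r, gamma, iters=200):
    """Illustrative homotopic-style mirror descent updates (KL proximal term plus a
    diminishing KL pull toward the uniform policy). The schedules for eta and tau
    below are assumptions for illustration only."""
    S, A = r.shape
    uniform = np.full((S, A), 1.0 / A)
    pi = uniform.copy()
    for k in range(iters):
        _, Q = evaluate_policy(P, r, pi, gamma)
        eta = 1.0 * (k + 1)                          # growing stepsize (assumption)
        tau = 1.0 / (k + 1) ** 2                     # vanishing homotopy weight (assumption)
        w = 1.0 / (1.0 + eta * tau)
        # Closed-form maximizer of  eta<Q, p> - eta*tau*KL(p||uniform) - KL(p||pi_k):
        #   pi_{k+1}(a|s) ∝ pi_k(a|s)^w * uniform(a)^(1-w) * exp(w * eta * Q(s, a))
        log_pi = w * np.log(pi) + (1 - w) * np.log(uniform) + w * eta * Q
        log_pi -= log_pi.max(axis=1, keepdims=True)  # stabilize the exponentiation
        pi = np.exp(log_pi)
        pi /= pi.sum(axis=1, keepdims=True)
    return pi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A, gamma = 4, 3, 0.9
    P = rng.dirichlet(np.ones(S), size=(S, A))       # random transition kernel
    r = rng.uniform(size=(S, A))                     # random rewards
    pi = hpmd_sketch(P, r, gamma)
    V, _ = evaluate_policy(P, r, pi, gamma)
    print("value of the returned policy:", np.round(V, 3))
```

Because the KL pull toward the uniform policy vanishes over iterations, no regularization remains in the objective being optimized; the sketch is only meant to illustrate how such a homotopic perturbation can be folded into a standard KL-proximal policy update in closed form.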