Policy optimization methods are among the most widely used classes of Reinforcement Learning (RL) algorithms. However, the theoretical understanding of these methods remains insufficient. Even in the episodic (time-inhomogeneous) tabular setting, the state-of-the-art regret bound for policy-based methods in \citet{shani2020optimistic} is only $\tilde{O}(\sqrt{S^2AH^4K})$, where $S$ is the number of states, $A$ is the number of actions, $H$ is the horizon, and $K$ is the number of episodes, leaving a $\sqrt{SH}$ gap relative to the information-theoretic lower bound $\tilde{\Omega}(\sqrt{SAH^3K})$. To bridge this gap, we propose a novel algorithm, Reference-based Policy Optimization with Stable at Any Time guarantee (\algnameacro), which features the property ``Stable at Any Time''. We prove that our algorithm achieves $\tilde{O}(\sqrt{SAH^3K} + \sqrt{AH^4K})$ regret. When $S > H$, our algorithm is minimax optimal up to logarithmic factors. To the best of our knowledge, RPO-SAT is the first computationally efficient, nearly minimax optimal policy-based algorithm for tabular RL.
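As a brief check, using only the notation defined above, the stated $\sqrt{SH}$ gap and the $S > H$ optimality condition can be verified directly:
% Ratio of the prior upper bound to the lower bound gives the claimed $\sqrt{SH}$ gap.
\[
\frac{\sqrt{S^2AH^4K}}{\sqrt{SAH^3K}} = \sqrt{\frac{S^2AH^4K}{SAH^3K}} = \sqrt{SH}.
\]
% When S >= H, the second term of our bound is dominated by the first,
% so the total regret matches the lower bound up to logarithmic factors.
\[
\sqrt{AH^4K} = \sqrt{H}\cdot\sqrt{AH^3K} \le \sqrt{S}\cdot\sqrt{AH^3K} = \sqrt{SAH^3K}.
\]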