While policy-based reinforcement learning (RL) has achieved tremendous success in practice, it is significantly less understood in theory, especially compared with value-based RL. In particular, it remains elusive how to design a provably efficient policy optimization algorithm that incorporates exploration. To bridge this gap, this paper proposes an \underline{O}ptimistic variant of the \underline{P}roximal \underline{P}olicy \underline{O}ptimization algorithm (OPPO), which follows an ``optimistic version'' of the policy gradient direction. This paper proves that, for episodic Markov decision processes with unknown transitions and full-information feedback of adversarial rewards, OPPO achieves an $\tilde{O}(\sqrt{|\mathcal{S}|^2|\mathcal{A}|H^3 T})$ regret. Here, $|\mathcal{S}|$ is the size of the state space, $|\mathcal{A}|$ is the size of the action space, $H$ is the episode horizon, and $T$ is the total number of steps. To the best of our knowledge, OPPO is the first provably efficient policy optimization algorithm that explores.
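As a brief sketch of the idea (the symbols $\alpha$, $\widehat{Q}_h^k$, and $\Gamma_h^k$ below are introduced only for illustration and are not part of the formal statement above), such an ``optimistic'' proximal step can be read as a KL-regularized policy update performed on a bonus-augmented action-value estimate:
\[
  \pi_h^{k+1}(\cdot \mid s) \;\propto\; \pi_h^k(\cdot \mid s)\,
  \exp\!\bigl\{\alpha\, Q_h^k(s,\cdot)\bigr\},
  \qquad
  Q_h^k(s,a) \;=\; \widehat{Q}_h^k(s,a) + \Gamma_h^k(s,a),
\]
where $\alpha$ is a step size, $\widehat{Q}_h^k$ is an empirical action-value estimate, and $\Gamma_h^k$ is an exploration bonus. Dropping the bonus recovers a standard proximal (mirror-descent) policy update, which is the sense in which the optimism is layered on top of policy optimization.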