In this paper, we design a novel Bregman gradient policy optimization framework for reinforcement learning based on Bregman divergences and momentum techniques. Specifically, we propose a Bregman gradient policy optimization (BGPO) algorithm that combines a basic momentum technique with mirror descent iteration. We further present an accelerated Bregman gradient policy optimization (VR-BGPO) algorithm based on a momentum-based variance-reduced technique. Moreover, we introduce a convergence analysis framework for Bregman gradient policy optimization in the nonconvex setting. Specifically, we prove that BGPO achieves a sample complexity of $\tilde{O}(\epsilon^{-4})$ for finding an $\epsilon$-stationary point while requiring only one trajectory per iteration, and that VR-BGPO reaches the best-known sample complexity of $\tilde{O}(\epsilon^{-3})$ for finding an $\epsilon$-stationary point, likewise requiring only one trajectory per iteration. In particular, by choosing different Bregman divergences, our methods unify many existing policy optimization algorithms and yield new variants, including the existing (variance-reduced) policy gradient algorithms and (variance-reduced) natural policy gradient algorithms. Extensive experimental results on multiple reinforcement learning tasks demonstrate the efficiency of our new algorithms.
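To make the role of the Bregman divergence concrete, the following display sketches a generic mirror-descent (Bregman proximal) policy update of the kind referred to above; the symbols $\eta$ (step size), $\psi$ (mirror map), and $u_t$ (a momentum-based stochastic estimate of the policy gradient) are illustrative notation rather than the paper's exact definitions:
\[
  \theta_{t+1}
  \;=\;
  \operatorname*{arg\,min}_{\theta}
  \Big\{ \langle -u_t,\, \theta \rangle
        \;+\; \tfrac{1}{\eta}\, D_{\psi}(\theta, \theta_t) \Big\},
  \qquad
  D_{\psi}(\theta, \theta') \;=\; \psi(\theta) - \psi(\theta') - \langle \nabla \psi(\theta'),\, \theta - \theta' \rangle .
\]
With the Euclidean choice $\psi(\theta)=\tfrac{1}{2}\|\theta\|^2$ this step reduces to a plain (stochastic) policy gradient update, while a Fisher-information-based quadratic mirror map yields a natural-policy-gradient-style update; this is the sense in which different Bregman divergences unify these algorithm families.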