Proximal policy optimization (PPO) has yielded state-of-the-art results in policy search, a subfield of reinforcement learning, with one of its key features being the use of a clipped surrogate objective function to restrict the step size of each policy update. Although such a restriction is helpful, the algorithm still suffers from performance instability and optimization inefficiency caused by the sudden flattening of the objective curve. To address this issue, we present a PPO variant, named Proximal Policy Optimization Smooth algorithm (PPOS), whose critical improvement is the use of a functional clipping method instead of a flat clipping method. We compare our method with PPO and PPORB, which adopts a rollback clipping method, and prove that our method can conduct more accurate updates at each time step than other PPO methods. Moreover, we show that it outperforms the latest PPO variants in both performance and stability on challenging continuous control tasks.
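To make the contrast between flat and functional clipping concrete, below is a minimal sketch in Python. The flat clipping follows the standard PPO surrogate; the tanh-based smooth clipping is only an illustrative assumption standing in for a functional clipping method, not the exact PPOS formulation from the paper, and the names (`flat_clip`, `smooth_clip`, `surrogate`) are hypothetical.

```python
# Sketch: flat clipping (standard PPO) vs. an assumed smooth "functional" clipping
# of the probability ratio. The tanh form is an illustrative placeholder only.
import numpy as np

EPS = 0.2  # clipping range, as in standard PPO

def flat_clip(ratio):
    """Standard PPO clipping: the surrogate is constant outside [1-eps, 1+eps],
    so its gradient with respect to the ratio drops to zero abruptly."""
    return np.clip(ratio, 1.0 - EPS, 1.0 + EPS)

def smooth_clip(ratio):
    """Hypothetical smooth clipping: squashes the ratio toward the trust
    region with tanh, so the surrogate flattens gradually instead of suddenly."""
    return 1.0 + EPS * np.tanh((ratio - 1.0) / EPS)

def surrogate(ratio, advantage, clip_fn):
    """Pessimistic (min) surrogate objective used by PPO-style methods."""
    return np.minimum(ratio * advantage, clip_fn(ratio) * advantage)

if __name__ == "__main__":
    advantage = 1.0  # positive advantage for illustration
    for r in np.linspace(0.5, 1.5, 11):
        print(f"ratio={r:4.2f}  flat={surrogate(r, advantage, flat_clip):6.3f}  "
              f"smooth={surrogate(r, advantage, smooth_clip):6.3f}")
```

Printing the two surrogates over a range of ratios shows the key qualitative difference the abstract refers to: the flat surrogate becomes exactly constant once the ratio leaves the trust region, while a smooth functional clipping keeps a gradually diminishing slope.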