Evolution Strategy (ES) is a powerful black-box optimization technique inspired by natural evolution. In each iteration, a key step entails ranking candidate solutions according to a fitness score. For an ES method in Reinforcement Learning (RL), this ranking step requires evaluating multiple policies. This is presently done via on-policy approaches: each policy's score is estimated by interacting several times with the environment using that policy. This leads to many wasteful interactions since, once the ranking is done, only the data associated with the top-ranked policies is used for subsequent learning. To improve sample efficiency, we propose a novel off-policy alternative for ranking, based on a local approximation of the fitness function. We demonstrate our idea in the context of a state-of-the-art ES method called Augmented Random Search (ARS). Simulations on MuJoCo tasks show that, compared to the original ARS, our off-policy variant has similar running times for reaching reward thresholds but needs only around 70% as much data. It also outperforms the recent Trust Region ES. We believe our ideas should be extendable to other ES methods as well.
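To make the ranking step concrete, below is a minimal Python sketch of one ARS-style iteration, with a synthetic fitness function standing in for environment rollouts. The function `rollout_return`, the toy objective, and all hyperparameter values are illustrative assumptions, not the paper's setup; the sketch only shows that every perturbation direction is scored with fresh on-policy evaluations, yet only the top-ranked directions contribute to the update, which is the wastefulness the proposed off-policy ranking is meant to reduce.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(theta, noise_scale=0.1):
    """Stand-in for an on-policy rollout: returns a noisy score for the
    policy parameters `theta`. In ARS proper this would be the total
    reward from interacting with the environment (e.g. a MuJoCo task)."""
    target = np.ones_like(theta)                 # synthetic optimum (toy objective)
    return -np.sum((theta - target) ** 2) + noise_scale * rng.normal()

def ars_step(theta, n_dirs=8, top_b=4, nu=0.05, alpha=0.02):
    """One ARS-style iteration: perturb, rank directions by on-policy
    scores, and update using only the top-ranked directions."""
    deltas = rng.normal(size=(n_dirs, theta.size))
    # On-policy ranking: each perturbed policy is scored by fresh rollouts.
    r_plus  = np.array([rollout_return(theta + nu * d) for d in deltas])
    r_minus = np.array([rollout_return(theta - nu * d) for d in deltas])
    # Rank directions by the better of their two scores; keep the top b.
    order = np.argsort(np.maximum(r_plus, r_minus))[::-1][:top_b]
    sigma = np.concatenate([r_plus[order], r_minus[order]]).std() + 1e-8
    # The update uses only the top-b directions; rollouts for the
    # remaining directions are effectively discarded after ranking.
    grad = ((r_plus[order] - r_minus[order])[:, None] * deltas[order]).mean(axis=0)
    return theta + alpha / sigma * grad

theta = np.zeros(5)
for _ in range(200):
    theta = ars_step(theta)
print("final parameters:", np.round(theta, 2))
```

In this sketch, 2 * n_dirs rollouts are spent per iteration but only 2 * top_b of them shape the update; an off-policy ranking scheme of the kind described above would reuse previously collected data to approximate the fitness scores instead of paying for all of these rollouts anew.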