Evolutionary Algorithms (EAs) and Deep Reinforcement Learning (DRL) have recently been combined to integrate the advantages of the two approaches for better policy learning. However, in existing hybrid methods, the EA is used to directly train the policy network, which leads to sample inefficiency and an unpredictable impact on policy performance. To better integrate these two approaches and avoid the drawbacks introduced by the EA, we devise a more efficient and principled way of combining EA and DRL. In this paper, we propose Evolutionary Action Selection-Twin Delayed Deep Deterministic Policy Gradient (EAS-TD3), a novel combination of EA and DRL. In EAS, we focus on optimizing the actions chosen by the policy network and attempt to obtain high-quality actions through an evolutionary algorithm to guide policy learning. We conduct several experiments on challenging continuous control tasks. The results show that EAS-TD3 outperforms other state-of-the-art methods.
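The core idea of evolutionary action selection can be illustrated with a minimal sketch: starting from the action proposed by the policy, an evolutionary loop mutates candidate actions and keeps the fittest according to a value estimate, yielding a higher-quality action that could then guide policy updates. The toy `policy` and `q_value` functions below are stand-in assumptions for illustration only, not the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's TD3 networks): a linear "policy"
# and a quadratic "critic" Q(s, a) with a known optimum at a = 0.7, so the
# effect of evolutionary action selection is easy to check.
def policy(state):
    return np.tanh(state @ np.array([[0.5], [-0.3]])).ravel()

def q_value(state, action):
    # Higher is better; peaks when every action dimension equals 0.7.
    return -float(np.sum((action - 0.7) ** 2))

def evolutionary_action_selection(state, pop_size=32, generations=10, sigma=0.2):
    """Hedged sketch of EAS: mutate candidates around the policy's proposal,
    score them with the critic, and keep the elite each generation."""
    population = policy(state) + sigma * rng.standard_normal((pop_size, 1))
    for _ in range(generations):
        fitness = np.array([q_value(state, a) for a in population])
        elite = population[np.argsort(fitness)[-pop_size // 2:]]  # keep top half
        children = elite + sigma * rng.standard_normal(elite.shape)
        population = np.concatenate([elite, children])
    fitness = np.array([q_value(state, a) for a in population])
    # The best evolved action could be stored to guide subsequent policy learning.
    return population[np.argmax(fitness)]

state = np.array([0.1, -0.4])
a_policy = policy(state)
a_evolved = evolutionary_action_selection(state)
```

In this sketch the evolved action scores at least as well as the policy's raw proposal under the critic, which mirrors the abstract's claim that evolved high-quality actions can serve as learning targets for the policy network.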