Deep reinforcement learning (DRL) algorithms have achieved remarkable success in solving a variety of complex control tasks. This success can be partly attributed to DRL encouraging agents to sufficiently explore the environment and collect diverse experiences during training. Exploration therefore plays a significant role in obtaining an optimal policy. Although recent works have made great progress on continuous control tasks, exploration in these tasks remains insufficiently investigated. To explicitly encourage exploration in continuous control tasks, we propose CCEP (Centralized Cooperative Exploration Policy), which exploits both underestimation and overestimation of value functions to maintain exploration capacity. CCEP first maintains two value functions initialized with different parameters and generates diverse policies with multiple exploration styles from this pair of value functions. In addition, a centralized policy framework enables message passing among the multiple policies, which further contributes to exploring the environment cooperatively. Extensive experimental results demonstrate that CCEP achieves higher exploration capacity. Empirical analysis shows that the policies learned by CCEP exhibit diverse exploration styles, which yields coverage of more regions of the environment. This exploration capacity enables CCEP to outperform current state-of-the-art methods across multiple continuous control tasks in our experiments.
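To make the idea of deriving multiple exploration styles from a pair of value functions concrete, the following is a minimal sketch only: it assumes a TD3-style twin-critic layout in PyTorch, where a pessimistic style maximizes the minimum of the two value estimates and an optimistic style maximizes their maximum. This is not the authors' implementation, and all names (Critic, Actor, pessimistic_actor, optimistic_actor) are hypothetical.

```python
# Illustrative sketch (assumed design, not the CCEP reference code):
# two independently initialized critics give an underestimating (min) and an
# overestimating (max) value estimate; each estimate drives a differently
# styled exploration policy.
import torch
import torch.nn as nn


class Critic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)


state_dim, action_dim = 17, 6  # e.g. a MuJoCo-style continuous control task

# Two value functions with different random initializations.
q1 = Critic(state_dim, action_dim)
q2 = Critic(state_dim, action_dim)

# Two policies, each trained against a different aggregation of the critics.
pessimistic_actor = Actor(state_dim, action_dim)  # maximizes min(Q1, Q2)
optimistic_actor = Actor(state_dim, action_dim)   # maximizes max(Q1, Q2)

state = torch.randn(32, state_dim)  # dummy batch of states

a_p = pessimistic_actor(state)
a_o = optimistic_actor(state)

# Each exploration style maximizes a different value estimate,
# yielding distinct policy-gradient directions for exploration.
loss_pessimistic = -torch.min(q1(state, a_p), q2(state, a_p)).mean()
loss_optimistic = -torch.max(q1(state, a_o), q2(state, a_o)).mean()
```

In such a setup, the disagreement between the two critics is what differentiates the policies: the pessimistic style behaves conservatively where the critics disagree, while the optimistic style is drawn toward those same regions, which is one plausible way a pair of value functions can induce cooperative, diverse exploration.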