Traditionally, deep artificial neural networks (DNNs) are trained through gradient descent. Recent research shows that Deep Neuroevolution (DNE) is also capable of evolving multi-million-parameter DNNs, which has proved particularly useful in the field of Reinforcement Learning (RL), mainly due to its excellent scalability and simplicity compared to traditional MDP-based RL methods. So far, DNE has only been applied to complex single-agent problems. As evolutionary methods are a natural choice for multi-agent problems, the question arises whether DNE can also be applied in a complex multi-agent setting. In this paper, we describe and validate a new approach based on Coevolution. To validate our approach, we benchmark two Deep Coevolutionary Algorithms on a range of multi-agent Atari games and compare our results against those of Ape-X DQN. Our results show that these Deep Coevolutionary Algorithms (1) can be successfully trained to play various games, (2) outperform Ape-X DQN on some of them, and therefore (3) demonstrate that Coevolution can be a viable approach to solving complex multi-agent decision-making problems.