Cooperative multi-agent reinforcement learning (MARL) requires agents to explore in order to learn how to cooperate. Existing value-based MARL algorithms commonly rely on random exploration, such as $\epsilon$-greedy, which is inefficient at discovering multi-agent cooperation. Additionally, the environment in MARL appears non-stationary to any individual agent due to the simultaneous training of other agents, leading to highly variable and thus unstable optimisation signals. In this work, we propose ensemble value functions for multi-agent exploration (EMAX), a general framework to extend any value-based MARL algorithm. EMAX trains ensembles of value functions for each agent to address the key challenges of exploration and non-stationarity: (1) The uncertainty of value estimates across the ensemble is used in a UCB policy to guide the exploration of agents towards parts of the environment which require cooperation. (2) Average value estimates across the ensemble serve as target values. These targets exhibit lower variance than commonly applied target networks, and we show that they lead to more stable gradients during optimisation. We instantiate three value-based MARL algorithms with EMAX, independent DQN, VDN, and QMIX, and evaluate them on 21 tasks across four environments. Using ensembles of five value functions, EMAX improves the sample efficiency and final evaluation returns of these algorithms by 53%, 36%, and 498%, respectively, averaged across all 21 tasks.
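The two ensemble mechanisms described above can be sketched in a few lines of code. The following is a minimal, hypothetical PyTorch illustration (not the paper's implementation): a per-agent ensemble of Q-networks, a UCB action rule that adds the ensemble's standard deviation as an exploration bonus, and TD targets bootstrapped from the ensemble-averaged next-state values. The ensemble size, UCB coefficient `c`, network sizes, and discount factor are illustrative assumptions.

```python
# Hypothetical sketch of ensemble-based exploration and targets for one agent.
import torch
import torch.nn as nn


class QEnsemble(nn.Module):
    """Ensemble of K independent Q-networks for a single agent."""

    def __init__(self, obs_dim: int, n_actions: int, k: int = 5, hidden: int = 64):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, n_actions),
            )
            for _ in range(k)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Q-values from every ensemble member: shape (K, batch, n_actions).
        return torch.stack([m(obs) for m in self.members])


def ucb_action(ensemble: QEnsemble, obs: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """UCB exploration: act greedily w.r.t. mean Q plus c times ensemble std."""
    q = ensemble(obs)                          # (K, batch, n_actions)
    mean, std = q.mean(dim=0), q.std(dim=0)    # disagreement acts as an exploration bonus
    return (mean + c * std).argmax(dim=-1)     # (batch,)


def td_targets(ensemble: QEnsemble, rewards, next_obs, dones, gamma: float = 0.99):
    """Bootstrapped targets from ensemble-averaged next-state value estimates."""
    with torch.no_grad():
        next_q = ensemble(next_obs).mean(dim=0)        # average over ensemble members
        next_v = next_q.max(dim=-1).values             # greedy next-state value
        return rewards + gamma * (1.0 - dones) * next_v
```

Averaging the ensemble when bootstrapping is what replaces the usual target network in this sketch: the mean over several independently initialised members has lower variance than any single estimate, which is the source of the more stable gradients referred to above.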