Quality Diversity (QD) has emerged as a powerful alternative optimization paradigm that aims to generate large and diverse collections of solutions, notably with its flagship algorithm MAP-ELITES (ME), which evolves solutions through mutations and crossovers. While very effective for some unstructured problems, early ME implementations relied exclusively on random search to evolve the population of solutions, rendering them notoriously sample-inefficient for high-dimensional problems, such as when evolving neural networks. To address these shortcomings, follow-up works exploit gradient information to guide the search, using techniques borrowed from either Black-Box Optimization (BBO) or Reinforcement Learning (RL). While mixing RL techniques with ME unlocked state-of-the-art performance on robotics control problems that require substantial exploration, it also saddled these ME variants with limitations common among RL algorithms that ME itself was free of, such as hyperparameter sensitivity, high stochasticity, and training instability; the latter can worsen as the population size increases, since some components are shared across the population in recent approaches. Furthermore, existing approaches mixing ME with RL tend to be tied to a specific RL algorithm, which effectively prevents their use on problems where the corresponding RL algorithm fails. To address these shortcomings, we introduce a flexible framework that allows the use of any RL algorithm and alleviates the aforementioned limitations by evolving populations of agents (whose definition includes hyperparameters and all learnable parameters) instead of just policies. We demonstrate the benefits brought about by our framework through extensive numerical experiments on a number of robotics control problems, some of which feature deceptive rewards, taken from the QD-RL literature.
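To make the vanilla ME loop referred to above concrete, the following is a minimal, self-contained sketch under simplifying assumptions: solutions are real-valued vectors, the archive is a regular grid over a toy 2D behaviour descriptor, and variation is plain Gaussian mutation. All function and variable names (`fitness`, `descriptor`, `cell_index`, grid sizes) are illustrative placeholders, not the paper's implementation or API.

```python
import numpy as np

GRID = (16, 16)   # archive resolution per descriptor dimension (assumed)
DIM = 8           # dimensionality of a solution (assumed)
ITERS = 10_000

def fitness(x):
    # Toy objective: higher is better (assumed for illustration).
    return -np.sum(x ** 2)

def descriptor(x):
    # Toy 2D behaviour descriptor mapped into [0, 1]^2 (assumed).
    return np.clip((x[:2] + 1.0) / 2.0, 0.0, 1.0)

def cell_index(desc):
    # Map a descriptor to its grid cell in the archive.
    return tuple(np.minimum((desc * np.array(GRID)).astype(int), np.array(GRID) - 1))

archive_fit, archive_sol = {}, {}

# Initialisation: random solutions seed the first cells of the archive.
for _ in range(100):
    x = np.random.uniform(-1.0, 1.0, DIM)
    c = cell_index(descriptor(x))
    if c not in archive_fit or fitness(x) > archive_fit[c]:
        archive_fit[c], archive_sol[c] = fitness(x), x

# Main loop: pick a random elite, mutate it, and keep the offspring if it
# fills an empty cell or improves the fitness of its cell.
for _ in range(ITERS):
    keys = list(archive_sol)
    parent = archive_sol[keys[np.random.randint(len(keys))]]
    child = parent + 0.1 * np.random.randn(DIM)   # Gaussian mutation
    c = cell_index(descriptor(child))
    if c not in archive_fit or fitness(child) > archive_fit[c]:
        archive_fit[c], archive_sol[c] = fitness(child), child

print(f"cells filled: {len(archive_sol)}, best fitness: {max(archive_fit.values()):.3f}")
```

In the framework described in the abstract, each archive entry would instead hold a full agent (policy weights, other learnable parameters, and hyperparameters), and variation would combine evolutionary operators with training steps from the chosen RL algorithm; the loop structure, however, stays the same.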