Quality-Diversity (QD) algorithms constitute a branch of optimization concerned with discovering a diverse, high-quality set of solutions to an optimization problem. Current QD methods commonly maintain diversity by dividing the behavior space into discrete regions, ensuring that solutions are distributed across different parts of the space. The QD problem is then solved by searching for the best solution in each region. This approach poses challenges in large solution spaces, where storing many solutions is impractical, and in high-dimensional behavior spaces, where discretization becomes ineffective due to the curse of dimensionality. We present an alternative framing of the QD problem, called \emph{Soft QD}, that sidesteps the need for discretization. We validate this formulation by demonstrating its desirable properties, such as monotonicity, and by relating its limiting behavior to the widely used QD Score metric. Furthermore, we leverage it to derive a novel differentiable QD algorithm, \emph{Soft QD Using Approximated Diversity (SQUAD)}, and demonstrate empirically that it is competitive with current state-of-the-art methods on standard benchmarks while offering better scalability to higher-dimensional problems.