Quality-Diversity (QD) algorithms seek to discover diverse, high-performing solutions across a behavior space, contrasting with conventional optimization methods that target a single optimum. Adversarial problems present unique challenges for QD approaches, as the competing nature of opposing sides creates interdependencies that complicate the evolutionary process. Existing QD methods applied to such scenarios typically fix one side, constraining behavioral diversity. We present Generational Adversarial MAP-Elites (GAME), a coevolutionary QD algorithm that evolves both sides by alternating which side is evolved at each generation. By integrating a vision embedding model, our approach eliminates the need for domain-specific behavior descriptors and instead operates on video. We validate GAME across three distinct adversarial domains: a multi-agent battle game, a soft-robot wrestling environment, and a deck-building game. Our experiments reveal several evolutionary phenomena, including arms-race-like dynamics, enhanced novelty through generational extinction, and the preservation of neutral mutations as crucial stepping stones toward the highest performance. While GAME successfully illuminates all adversarial problems, its capacity for truly open-ended discovery remains constrained by the finite nature of the underlying search spaces. These findings establish GAME's broad applicability while highlighting opportunities for future research into open-ended adversarial coevolution.
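The core loop the abstract describes, alternating which side's MAP-Elites archive is evolved each generation while the other side is held fixed, can be illustrated with a minimal sketch. Everything here is a hypothetical toy (scalar genomes, the `descriptor`, `mutate`, and margin-based fitness functions are illustrative assumptions), not the authors' implementation, which operates on vision embeddings of gameplay video.

```python
import random

def mutate(x):
    # Toy mutation on a scalar genome (illustrative assumption)
    return x + random.uniform(-0.1, 0.1)

def descriptor(x, bins=10):
    # Toy behavior descriptor: bin the genome into an archive cell
    # (the paper instead derives descriptors from video embeddings)
    return int(abs(x) * bins) % bins

def game(generations=20, children=30, seed=0):
    """Toy alternating coevolutionary MAP-Elites loop (GAME-style sketch)."""
    rng = random.Random(seed)
    archives = [{}, {}]  # one archive per side: cell -> (fitness, genome)
    for side in (0, 1):
        g0 = rng.random()
        archives[side][descriptor(g0)] = (0.0, g0)
    for gen in range(generations):
        side = gen % 2                       # alternate which side evolves
        frozen = [g for _, g in archives[1 - side].values()]
        for _ in range(children):
            _, parent = rng.choice(list(archives[side].values()))
            child = mutate(parent)
            opp = rng.choice(frozen)         # evaluate vs. the frozen side
            fit = child - opp                # toy margin each side maximizes
            cell = descriptor(child)
            # Standard MAP-Elites replacement: keep the best elite per cell
            if cell not in archives[side] or fit > archives[side][cell][0]:
                archives[side][cell] = (fit, child)
    return archives

archives = game()
```

The key structural point is that only one archive's elites are mutated per generation, while opponents are sampled from the other, frozen archive, which is what decouples the two sides' interdependent evaluations.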