Deep reinforcement learning (RL) agents may successfully generalize to new settings if trained on an appropriately diverse set of environment and task configurations. Unsupervised Environment Design (UED) is a promising self-supervised RL paradigm, wherein the free parameters of an underspecified environment are automatically adapted during training to the agent's capabilities, leading to the emergence of diverse training environments. Here, we cast Prioritized Level Replay (PLR), an empirically successful but theoretically unmotivated method that selectively samples randomly-generated training levels, as UED. We argue that by curating completely random levels, PLR, too, can generate novel and complex levels for effective training. This insight reveals a natural class of UED methods we call Dual Curriculum Design (DCD). Crucially, DCD includes both PLR and a popular UED algorithm, PAIRED, as special cases and inherits similar theoretical guarantees. This connection allows us to develop novel theory for PLR, providing a version with a robustness guarantee at Nash equilibria. Furthermore, our theory suggests a highly counterintuitive improvement to PLR: by stopping the agent from updating its policy on uncurated levels (training on less data), we can improve the convergence to Nash equilibria. Indeed, our experiments confirm that our new method, PLR$^{\perp}$, obtains better results on a suite of out-of-distribution, zero-shot transfer tasks, in addition to demonstrating that PLR$^{\perp}$ improves the performance of PAIRED, from which it inherited its theoretical framework.
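The core mechanism summarized above, curating high-regret levels out of purely random generation and withholding policy updates on uncurated levels, can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: levels are assumed to be hashable identifiers (e.g. generator seeds, as in PLR), and the `agent.collect`, `agent.update`, `agent.estimate_regret`, and `level_generator.sample` interfaces are hypothetical names introduced only for this example.

```python
import random


class RobustPLRSketch:
    """Minimal sketch of the curation loop described in the abstract:
    gradient updates are taken only on levels replayed from the curated
    buffer; freshly generated random levels are evaluated to obtain a
    regret score but never used for policy updates."""

    def __init__(self, agent, level_generator, replay_prob=0.5, buffer_size=256):
        self.agent = agent                        # assumed: .collect(), .update(), .estimate_regret()
        self.level_generator = level_generator    # assumed: .sample() -> hashable level id
        self.replay_prob = replay_prob
        self.buffer_size = buffer_size
        self.buffer = {}                          # level id -> estimated regret score

    def _sample_replay_level(self):
        # Sample a curated level with probability proportional to its score.
        levels, scores = zip(*self.buffer.items())
        total = sum(scores) or 1.0
        return random.choices(levels, weights=[s / total for s in scores])[0]

    def _update_buffer(self, level, score):
        self.buffer[level] = score
        if len(self.buffer) > self.buffer_size:
            # Evict the lowest-scoring level, keeping only high-regret levels.
            self.buffer.pop(min(self.buffer, key=self.buffer.get))

    def step(self):
        if self.buffer and random.random() < self.replay_prob:
            # Curated branch: train on a replayed level and refresh its score.
            level = self._sample_replay_level()
            trajectory = self.agent.collect(level)
            self.agent.update(trajectory)         # the only place a gradient step is taken
            self._update_buffer(level, self.agent.estimate_regret(trajectory))
        else:
            # Exploratory branch: evaluate a fresh random level without updating
            # the policy, recording its regret score for possible future replay.
            level = self.level_generator.sample()
            trajectory = self.agent.collect(level)
            self._update_buffer(level, self.agent.estimate_regret(trajectory))
```

In PLR the regret score is typically estimated from the agent's own experience, for instance via the positive value loss; the sampling and eviction rules above are simplified placeholders for those details.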