The goal of continuous control is to synthesize desired behaviors. In reinforcement learning (RL)-driven approaches, this is often accomplished through careful task reward engineering for efficient exploration and running an off-the-shelf RL algorithm. While reward maximization is at the core of RL, reward engineering is not the only -- and sometimes not the easiest -- way of specifying complex behaviors. In this paper, we introduce \braxlines, a toolkit for fast and interactive RL-driven behavior generation beyond simple reward maximization. It includes Composer, a programmatic API for generating continuous control environments, and a set of stable, well-tested baselines for two families of algorithms -- mutual information maximization (MiMax) and divergence minimization (DMin) -- supporting unsupervised skill learning and distribution sketching as alternative modes of behavior specification. In addition, we discuss how to standardize metrics for evaluating these algorithms, since evaluation can no longer rely on simple reward maximization. Our implementations build on the hardware-accelerated Brax simulator in JAX with minimal modifications, enabling behavior synthesis within minutes of training. We hope Braxlines can serve as an interactive toolkit for the rapid creation and testing of environments and behaviors, empowering an explosion of future benchmark designs, new modes of RL-driven behavior generation, and the algorithmic research they enable.
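As a rough, illustrative sketch (the precise objectives are defined in the paper body), the two baseline families can be summarized as follows: MiMax algorithms maximize a variational lower bound on the mutual information between visited states $s$ and latent skills $z$ via a learned discriminator $q_\phi(z \mid s)$, while DMin algorithms minimize a divergence between the policy's state marginal $d^{\pi}(s)$ and a user-sketched target distribution $p^{*}(s)$:
\[
\text{MiMax:}\quad I(S;Z) \;\ge\; \mathcal{H}(Z) + \mathbb{E}_{z \sim p(z),\, s \sim d^{\pi_z}}\!\big[\log q_\phi(z \mid s)\big],
\qquad
\text{DMin:}\quad \min_{\pi}\; D\!\big(d^{\pi}(s)\,\big\|\,p^{*}(s)\big).
\]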