Recent developments in the field of model-based RL have proven successful in a range of environments, especially ones where planning is essential. However, such successes have been limited to deterministic fully-observed environments. We present a new approach that handles stochastic and partially-observable environments. Our key insight is to use discrete autoencoders to capture the multiple possible effects of an action in a stochastic environment. We use a stochastic variant of Monte Carlo tree search to plan over both the agent's actions and the discrete latent variables representing the environment's response. Our approach significantly outperforms an offline version of MuZero on a stochastic interpretation of chess where the opponent is considered part of the environment. We also show that our approach scales to DeepMind Lab, a first-person 3D environment with large visual observations and partial observability.
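The key idea above — searching over both the agent's actions and discrete latent variables that stand in for the environment's stochastic response — can be sketched as a two-level expectimax. All names below (`transition`, `code_prior`, the toy reward) are hypothetical stand-ins: the paper learns the discrete codes with an autoencoder and plans with a stochastic variant of Monte Carlo tree search, whereas this sketch hand-writes a toy model and does exhaustive search.

```python
# A minimal sketch, assuming hand-written toy stand-ins for the learned
# model. Agent decision nodes maximize over actions; chance nodes take an
# expectation over discrete latent codes representing the environment's
# possible responses (e.g. the opponent's move in stochastic chess).

ACTIONS = [0, 1]          # the agent's available actions
LATENT_CODES = [0, 1, 2]  # discrete codes for the environment's response

def transition(state, action, code):
    """Toy deterministic model: next state given action and latent code."""
    return state + action - code

def reward(state):
    """Toy reward: prefer states near zero."""
    return -abs(state)

def code_prior(state, action):
    """Toy uniform prior over the environment's latent responses."""
    p = 1.0 / len(LATENT_CODES)
    return {c: p for c in LATENT_CODES}

def plan(state, depth):
    """Expectimax: max over actions, expectation over latent codes."""
    if depth == 0:
        return reward(state), None
    best_value, best_action = float("-inf"), None
    for a in ACTIONS:
        # Chance node: average over the environment's possible responses,
        # weighted by the prior over latent codes.
        value = sum(
            p * plan(transition(state, a, c), depth - 1)[0]
            for c, p in code_prior(state, a).items()
        )
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action

value, action = plan(0, 2)
```

In the actual method the chance-node expansion is guided by learned priors inside tree search rather than enumerated exhaustively; the sketch only shows the alternation between agent decisions and discrete environment responses.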