Progress in Reinforcement Learning (RL) algorithms goes hand-in-hand with the development of challenging environments that test the limits of current methods. While existing RL environments provide either sufficient complexity or fast simulation, they rarely offer both. Here, we present the NetHack Learning Environment (NLE), a scalable, procedurally generated, stochastic, rich, and challenging environment for RL research based on the popular single-player terminal-based roguelike game, NetHack. We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL, while dramatically reducing the computational resources required to gather a large amount of experience. We compare NLE and its task suite to existing alternatives, and discuss why it is an ideal medium for testing the robustness and systematic generalization of RL agents. We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration, alongside qualitative analysis of various agents trained in the environment. NLE is open source at https://github.com/facebookresearch/nle.
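NLE exposes its task suite through the standard Gym interface. The sketch below shows a minimal random-agent interaction loop, following the usage pattern documented in the project's README; `NetHackScore-v0` is one of the registered tasks in the released suite.

```python
# Minimal interaction with NLE via the Gym API (random policy for illustration).
import gym
import nle  # importing nle registers the NetHack tasks with Gym

env = gym.make("NetHackScore-v0")
obs = env.reset()  # each reset procedurally generates a new dungeon
done = False
while not done:
    action = env.action_space.sample()  # replace with a trained policy
    obs, reward, done, info = env.step(action)
env.close()
```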
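The exploration baseline mentioned above relies on Random Network Distillation (RND): a fixed, randomly initialized target network embeds observations, a predictor network is trained to match those embeddings, and the prediction error serves as an intrinsic reward that is high for novel states. The following is a minimal sketch of that computation; the network sizes are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch of the RND intrinsic-reward computation.
# Layer widths and obs_dim are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class RND(nn.Module):
    def __init__(self, obs_dim: int, embed_dim: int = 128):
        super().__init__()
        # Fixed, randomly initialized target network (never trained).
        self.target = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )
        # Predictor trained (e.g. by MSE) to match the target's embeddings.
        self.predictor = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )
        for p in self.target.parameters():
            p.requires_grad_(False)

    def intrinsic_reward(self, obs: torch.Tensor) -> torch.Tensor:
        # Prediction error is large on rarely seen observations,
        # so using it as a bonus reward encourages exploration.
        with torch.no_grad():
            target_emb = self.target(obs)
        pred_emb = self.predictor(obs)
        return ((pred_emb - target_emb) ** 2).mean(dim=-1)
```

Minimizing this same error with an optimizer over the predictor's parameters drives the bonus toward zero on frequently visited states, while fresh regions of the dungeon keep yielding high intrinsic reward.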