Building agents capable of understanding language instructions is critical to effective and robust human-AI collaboration. Recent work focuses on training these agents via reinforcement learning in environments with synthetic language; however, instructions often define long-horizon, sparse-reward tasks, and learning policies requires many episodes of experience. We introduce ELLA: Exploration through Learned Language Abstraction, a reward shaping approach geared towards boosting sample efficiency in sparse reward environments by correlating high-level instructions with simpler low-level constituents. ELLA has two key elements: 1) A termination classifier that identifies when agents complete low-level instructions, and 2) A relevance classifier that correlates low-level instructions with success on high-level tasks. We learn the termination classifier offline from pairs of instructions and terminal states. Notably, in departure from prior work in language and abstraction, we learn the relevance classifier online, without relying on an explicit decomposition of high-level instructions to low-level instructions. On a suite of complex BabyAI environments with varying instruction complexities and reward sparsity, ELLA shows gains in sample efficiency relative to language-based shaping and traditional RL methods.
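To make the reward-shaping idea concrete, here is a minimal illustrative sketch, not the paper's implementation: it assumes hypothetical termination and relevance classifiers exposed as callables, a fixed set of candidate low-level instructions, and a simple additive bonus; all names and interfaces below are assumptions for illustration only.

```python
from typing import Callable, Iterable


class EllaStyleRewardShaper:
    """Illustrative sketch of ELLA-style reward shaping (hypothetical API).

    A termination classifier (trained offline on instruction/terminal-state
    pairs) flags when a low-level instruction has just been completed, and a
    relevance classifier (learned online) flags whether that low-level
    instruction is relevant to the current high-level task. When both fire,
    a small bonus is added to the sparse environment reward.
    """

    def __init__(
        self,
        termination_clf: Callable[[object, str], float],   # p(low-level instruction completed in state)
        relevance_clf: Callable[[str, str], float],         # p(low-level relevant to high-level)
        low_level_instructions: Iterable[str],
        bonus: float = 0.1,
        threshold: float = 0.5,
    ):
        self.termination_clf = termination_clf
        self.relevance_clf = relevance_clf
        self.low_level_instructions = list(low_level_instructions)
        self.bonus = bonus
        self.threshold = threshold

    def shape(self, state, high_level_instruction: str, env_reward: float) -> float:
        """Return the environment reward plus a bonus for each low-level
        instruction that appears both completed and relevant."""
        shaped = env_reward
        for low in self.low_level_instructions:
            completed = self.termination_clf(state, low) > self.threshold
            relevant = self.relevance_clf(high_level_instruction, low) > self.threshold
            if completed and relevant:
                shaped += self.bonus
        return shaped
```

In the paper, the relevance classifier is updated online from trajectories that succeed on the high-level task, rather than from an explicit decomposition of instructions; the sketch above only shows where its predictions would enter the shaped reward.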