Building agents capable of understanding language instructions is critical to effective and robust human-AI collaboration. Recent work focuses on training these instruction-following agents via reinforcement learning in environments with synthetic language; however, these instructions often define long-horizon, sparse-reward tasks, and learning policies requires many episodes of experience. To this end, we introduce ELLA: Exploration through Learned Language Abstraction, a reward shaping approach that correlates high-level instructions with simpler low-level instructions to enrich the sparse rewards afforded by the environment. ELLA has two key elements: 1) a termination classifier that identifies when agents complete low-level instructions, and 2) a relevance classifier that correlates low-level instructions with success on high-level tasks. We learn the termination classifier offline from pairs of instructions and terminal states. Notably, in a departure from prior work on language and abstraction, we learn the relevance classifier online, without relying on an explicit decomposition of high-level instructions into low-level instructions. On a suite of complex grid-world environments with varying instruction complexities and reward sparsity, ELLA shows a significant gain in sample efficiency across several environments compared to competitive language-based reward shaping and no-shaping methods.
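As a rough illustration of the shaping idea described above (a minimal sketch, not the paper's implementation), the snippet below shows how a shaped reward might combine the environment's sparse reward with bonuses for completing low-level instructions deemed relevant to the high-level task. The names `TerminationClassifier`, `RelevanceClassifier`, `LAMBDA`, and the `predict` interface are all assumptions for illustration.

```python
# Hypothetical sketch of language-based reward shaping in the spirit of ELLA.
# All names and interfaces here are illustrative assumptions, not the authors' code.

LAMBDA = 0.1  # assumed bonus granted for completing a relevant low-level instruction


def shaped_reward(env_reward, state, high_level_instr, low_level_instrs,
                  termination_clf, relevance_clf):
    """Return the environment reward plus a bonus whenever a low-level
    instruction both terminates in `state` (termination classifier, trained
    offline) and is judged relevant to the high-level task (relevance
    classifier, learned online)."""
    bonus = 0.0
    for low_instr in low_level_instrs:
        completed = termination_clf.predict(state, low_instr)
        relevant = relevance_clf.predict(high_level_instr, low_instr)
        if completed and relevant:
            bonus += LAMBDA
    return env_reward + bonus
```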