We present a framework for learning hierarchical policies from demonstrations, using sparse natural language annotations to guide the discovery of reusable skills for autonomous decision-making. We formulate a generative model of action sequences in which goals generate sequences of high-level subtask descriptions, and these descriptions generate sequences of low-level actions. We describe how to train this model using primarily unannotated demonstrations by parsing demonstrations into sequences of named high-level subtasks, using only a small number of seed annotations to ground language in action. In trained models, the space of natural language commands indexes a combinatorial library of skills; agents can use these skills to plan by generating high-level instruction sequences tailored to novel goals. We evaluate this approach in the ALFRED household simulation environment, providing natural language annotations for only 10% of demonstrations. Our approach completes more than twice as many tasks as a standard approach to learning from demonstrations, matching the performance of instruction-following models with access to ground-truth plans during both training and evaluation.