In this paper we study the problem of learning multi-step dynamics prediction models (jumpy models) from unlabeled experience, and their utility for fast inference of (high-level) plans in downstream tasks. In particular, we propose to learn a jumpy model alongside a skill embedding space offline, from previously collected experience for which no labels or reward annotations are required. We then investigate several options for harnessing those learned components in combination with model-based planning or model-free reinforcement learning (RL) to speed up learning on downstream tasks. We conduct a set of experiments in the RGB-stacking environment, showing that planning with the learned skills and the associated model can enable zero-shot generalization to new tasks and can further speed up training of policies via reinforcement learning. These experiments demonstrate that jumpy models which incorporate temporal abstraction can facilitate planning in long-horizon tasks in which standard dynamics models fail.