Everyday tasks that are long-horizon and comprise a sequence of multiple implicit subtasks still pose a major challenge in offline robot control. While a number of prior methods have aimed to address this setting with variants of imitation learning and offline reinforcement learning, the learned behavior is typically narrow and often struggles to reach configurable long-horizon goals. As both paradigms have complementary strengths and weaknesses, we propose a novel hierarchical approach that combines the strengths of both methods to learn task-agnostic long-horizon policies from high-dimensional camera observations. Concretely, we combine a low-level policy that learns latent skills via imitation learning with a high-level policy, learned via offline reinforcement learning, that chains these latent behavior priors together. Experiments on various simulated and real robot control tasks show that our formulation enables producing previously unseen combinations of skills to reach temporally extended goals by "stitching" together latent skills through goal chaining, with an order-of-magnitude improvement in performance over state-of-the-art baselines. We even learn one multi-task visuomotor policy for 25 distinct manipulation tasks in the real world which outperforms both imitation learning and offline reinforcement learning techniques.
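To make the two-level structure concrete, below is a minimal sketch, assuming a PyTorch implementation, of the inference loop the abstract describes: a high-level policy trained with offline reinforcement learning proposes a latent skill conditioned on the current observation and a long-horizon goal, and an imitation-learned low-level policy decodes that skill into motor actions, re-querying the high level every K steps to chain skills. All module names, network sizes, the skill horizon K, and the Gym-style environment interface are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a hierarchical goal-conditioned policy with latent skills.
import torch
import torch.nn as nn

OBS_DIM, GOAL_DIM, SKILL_DIM, ACT_DIM, K = 64, 64, 16, 7, 10  # assumed sizes


class HighLevelPolicy(nn.Module):
    """pi_H(z | s, g): picks a latent skill for the current goal (offline RL)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + GOAL_DIM, 256), nn.ReLU(),
            nn.Linear(256, SKILL_DIM),
        )

    def forward(self, obs, goal):
        return self.net(torch.cat([obs, goal], dim=-1))


class LowLevelPolicy(nn.Module):
    """pi_L(a | s, z): decodes a latent skill into an action (imitation-learned)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + SKILL_DIM, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM),
        )

    def forward(self, obs, skill):
        return self.net(torch.cat([obs, skill], dim=-1))


@torch.no_grad()
def rollout(env, pi_h, pi_l, goal, max_steps=200):
    """Chain latent skills: re-query the high-level policy every K steps.

    Assumes a Gym-style env whose observations are already torch tensors.
    """
    obs = env.reset()
    skill = None
    for t in range(max_steps):
        if t % K == 0:                       # skill-chaining boundary
            skill = pi_h(obs, goal)          # select the next latent skill
        action = pi_l(obs, skill)            # decode skill into a motor command
        obs, _, done, _ = env.step(action)
        if done:
            break
```

The design choice sketched here is that the high level never emits raw actions: it only composes behavior priors that the low level has already learned from demonstrations, which is what allows unseen skill combinations to be "stitched" together at test time.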