To assist with everyday human activities, robots must solve complex long-horizon tasks and generalize to novel settings. Recent deep reinforcement learning (RL) methods show promise in fully autonomous learning, but they struggle to reach long-term goals in large environments. Task and Motion Planning (TAMP) approaches, on the other hand, excel at solving and generalizing across long-horizon tasks, thanks to their powerful state and action abstractions. However, they assume predefined skill sets, which limits their real-world applications. In this work, we combine the benefits of the two paradigms and propose an integrated task planning and skill learning framework named LEAGUE (Learning and Abstraction with Guidance). LEAGUE leverages the symbolic interface of a task planner to guide RL-based skill learning and creates an abstract state space that enables skill reuse. More importantly, LEAGUE learns manipulation skills in situ within the task planning system, continuously growing its capabilities and the set of tasks it can solve. We evaluate LEAGUE on three challenging simulated task domains and show that it outperforms baselines by a large margin, and that the learned skills can be reused to accelerate learning in new tasks and domains. Additional resources are available at https://bit.ly/3eUOx4N.
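To make the central mechanism concrete, the following is a minimal sketch (not the authors' code) of how a planner's symbolic interface can guide skill learning: each operator's preconditions define when a skill may be initiated, and its effects define a sparse success reward over the abstract state. The names `Operator`, `holds`, and `skill_reward` are hypothetical illustrations, not LEAGUE's actual API.

```python
# Hypothetical sketch: a symbolic operator supplies the initiation condition
# and reward for an RL skill, so skills stay aligned with the task planner.
from dataclasses import dataclass


@dataclass(frozen=True)
class Operator:
    """A planner operator: symbolic preconditions and expected effects."""
    name: str
    preconditions: frozenset  # predicates that must hold to start the skill
    effects: frozenset        # predicates the learned skill must achieve


def holds(predicates, abstract_state):
    """Check whether all symbolic predicates hold in the abstract state,
    represented here as a set of true ground predicates."""
    return predicates <= abstract_state


def skill_reward(op, abstract_state):
    """Sparse reward: the skill succeeds once the operator's effects hold."""
    return 1.0 if holds(op.effects, abstract_state) else 0.0


# Usage: the planner sequences operators toward a goal; an operator whose
# skill is not yet reliable is trained in place, growing the skill library.
pick = Operator(
    name="pick(block)",
    preconditions=frozenset({"reachable(block)", "handempty"}),
    effects=frozenset({"holding(block)"}),
)
state = frozenset({"reachable(block)", "handempty"})
assert holds(pick.preconditions, state)  # the skill may be initiated here
print(skill_reward(pick, frozenset({"holding(block)"})))  # 1.0 on success
```

Because the reward and initiation set are expressed over the abstract state rather than raw observations, a skill trained for one operator can, in principle, be reused wherever the same symbolic conditions arise, which is what enables the transfer to new tasks and domains described above.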