Imitation learning is a popular method for teaching robots new behaviors. However, most existing methods focus on teaching short, isolated skills rather than long, multi-step tasks. To bridge this gap, imitation learning algorithms must not only learn individual skills but also acquire an abstract understanding of how to sequence these skills to perform extended tasks effectively. This paper addresses this challenge by proposing a neuro-symbolic imitation learning framework. From task demonstrations, the system first learns a symbolic representation that abstracts the low-level state-action space. The learned representation decomposes a task into easier subtasks and allows the system to leverage symbolic planning to generate abstract plans. Subsequently, the system uses this task decomposition to learn a set of neural skills capable of refining abstract plans into actionable robot commands. Experimental results in three simulated robotic environments demonstrate that, compared to baselines, our neuro-symbolic approach increases data efficiency, improves generalization capabilities, and facilitates interpretability.
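The pipeline the abstract describes (learn a symbolic abstraction from demonstrations, plan over the resulting symbols, then refine the abstract plan with per-subtask neural skills) can be sketched minimally as follows. This is an illustrative sketch only: all names (`learn_symbolic_representation`, `symbolic_plan`, `train_skill`) and the toy demonstration format are hypothetical, and the placeholder policies stand in for the learned neural components described in the paper.

```python
# Hypothetical sketch of the neuro-symbolic imitation learning pipeline.
# All function names and data formats are illustrative, not the paper's API.

def learn_symbolic_representation(demonstrations):
    """Abstract the low-level state-action space into subtask symbols.
    Placeholder: collect the distinct subtask labels seen in the demos."""
    return sorted({step["subtask"] for demo in demonstrations for step in demo})

def symbolic_plan(symbols, goal):
    """Placeholder symbolic planner: select the symbols needed for the goal."""
    return [s for s in symbols if s in goal]

def train_skill(symbol, demonstrations):
    """Placeholder for training one neural skill per subtask symbol.
    Returns a trivial stand-in policy built from the matching demo steps."""
    data = [step for demo in demonstrations
            for step in demo if step["subtask"] == symbol]
    return lambda state: data[0]["action"]

# Toy demonstrations: each step carries a subtask label and an action.
demos = [
    [{"subtask": "reach", "action": "move"}, {"subtask": "grasp", "action": "close"}],
    [{"subtask": "reach", "action": "move"}, {"subtask": "place", "action": "open"}],
]

symbols = learn_symbolic_representation(demos)          # abstract the demos
plan = symbolic_plan(symbols, goal=["reach", "grasp"])  # abstract plan
skills = {s: train_skill(s, demos) for s in plan}       # one skill per subtask
commands = [skills[s](state=None) for s in plan]        # refine plan to commands
```

In the paper's actual framework, the symbolic representation and skills are learned jointly from the demonstrations rather than read off from labels; the sketch only conveys the division of labor between the symbolic planner and the neural skills.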