Automated planning enables robots to find plans that achieve complex, long-horizon tasks, given a planning domain. This planning domain consists of a list of actions with their associated preconditions and effects, and is usually defined manually by a human expert, which is time-consuming or even infeasible. In this paper, we introduce a novel method for generating this domain automatically from human demonstrations. First, we automatically segment the demonstrations and recognize the observed actions. From these demonstrations, the relevant preconditions and effects are obtained, and the associated planning operators are generated. Finally, a sequence of actions that satisfies a user-defined goal can be planned with a symbolic planner. The generated plan is executed by the TIAGo robot in a simulated environment. We tested our method on a dataset of 12 demonstrations collected from three different participants. The results show that our method generates executable plans from a single demonstration with a 92% success rate, and with a 100% success rate when the information from all demonstrations is combined, even for previously unknown stacking goals.
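To make the notion of a planning operator with preconditions and effects concrete, the following is a minimal, self-contained sketch; it is not the representation used in the paper, and the `Operator` class, the `pick` operator, and the predicate names are hypothetical, chosen only to illustrate how a symbolic planner would check applicability and compute successor states.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Operator:
    """Illustrative planning operator: preconditions must hold before the
    action can be applied; effects say which predicates are added/removed."""
    name: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

    def applicable(self, state: set) -> bool:
        # The action can be applied only if all preconditions hold in the state.
        return self.preconditions <= state

    def apply(self, state: set) -> set:
        # Successor state: remove the deleted predicates, then add the new ones.
        return (state - self.del_effects) | self.add_effects


# Hypothetical "pick" operator for a tabletop stacking task.
pick = Operator(
    name="pick(cube_a)",
    preconditions=frozenset({"clear(cube_a)", "hand_empty"}),
    add_effects=frozenset({"holding(cube_a)"}),
    del_effects=frozenset({"clear(cube_a)", "hand_empty"}),
)

state = {"clear(cube_a)", "on_table(cube_a)", "hand_empty"}
if pick.applicable(state):
    state = pick.apply(state)
print(state)  # {'on_table(cube_a)', 'holding(cube_a)'}
```

In this sketch, learning the domain from demonstrations would amount to filling in the `preconditions`, `add_effects`, and `del_effects` sets of each recognized action from the observed states before and after its execution.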