In this paper, we report the results of our latest work on the automated generation of planning operators from human demonstrations, and we present some of our future research ideas. To automatically generate planning operators, our system segments and recognizes the different actions observed in human demonstrations. We then propose an automatic extraction method to detect the relevant preconditions and effects from these demonstrations. Finally, our system generates the associated planning operators and uses a symbolic planner to find a sequence of actions that satisfies a user-defined goal. The resulting plan is deployed on a simulated TIAGo robot. Our future research directions include learning from and explaining execution failures, and detecting cause-effect relationships between demonstrated hand activities and their consequences on the robot's environment. The former is crucial for trust-based and efficient human-robot collaboration, the latter for learning in realistic and dynamic environments.
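To make the pipeline concrete, the following is a minimal sketch, not the paper's implementation, of how an extracted planning operator with preconditions and effects might be represented and then chained by a symbolic planner. The predicate names, the example "pick"/"place" operators, and the depth-limited forward search are hypothetical illustrations standing in for the system's actual extraction and planning components.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Operator:
    """A STRIPS-style planning operator (hypothetical illustration)."""
    name: str
    preconditions: frozenset  # predicates that must hold before execution
    add_effects: frozenset    # predicates the action makes true
    del_effects: frozenset    # predicates the action makes false

    def applicable(self, state: frozenset) -> bool:
        return self.preconditions <= state

    def apply(self, state: frozenset) -> frozenset:
        return (state - self.del_effects) | self.add_effects


# Example operators one could extract from a pick-and-place demonstration.
pick = Operator("pick(cup)",
                preconditions=frozenset({"on(cup, table)", "hand_empty"}),
                add_effects=frozenset({"holding(cup)"}),
                del_effects=frozenset({"on(cup, table)", "hand_empty"}))
place = Operator("place(cup, shelf)",
                 preconditions=frozenset({"holding(cup)"}),
                 add_effects=frozenset({"on(cup, shelf)", "hand_empty"}),
                 del_effects=frozenset({"holding(cup)"}))


def plan(operators, state, goal, depth=5):
    """Naive depth-limited forward search standing in for a symbolic planner."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for op in operators:
        if op.applicable(state):
            rest = plan(operators, op.apply(state), goal, depth - 1)
            if rest is not None:
                return [op.name] + rest
    return None


if __name__ == "__main__":
    initial = frozenset({"on(cup, table)", "hand_empty"})
    goal = frozenset({"on(cup, shelf)"})
    print(plan([pick, place], initial, goal))  # ['pick(cup)', 'place(cup, shelf)']
```

In practice, the preconditions and effects would be filled in by the proposed extraction method from the segmented demonstrations, and the plan would be handed to the simulated TIAGo robot for execution rather than printed.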