In this paper we present a framework for learning skills from human demonstrations in the form of geometric nullspaces, which can then be executed by a robot. We collect human demonstration data, fit geometric nullspaces to it, and infer the corresponding geometric constraint models. These geometric constraints provide a powerful mathematical model as well as an intuitive representation of the skill in terms of the objects involved. To execute the skill on a robot, we combine this geometric skill description with the robot's kinematics and other environmental constraints, from which poses can be sampled for execution. The result is a system that takes human demonstrations as input, learns the underlying skill model, and executes the learnt skill on different robots in different dynamic environments. We evaluate our approach on a simulated industrial robot and execute the final task on the iCub humanoid robot.