Robots applied in therapeutic scenarios, for instance in the therapy of individuals with Autism Spectrum Disorder, are sometimes used for imitation learning activities in which a person needs to repeat motions performed by the robot. To simplify the task of incorporating new types of motions that a robot can perform, it is desirable for the robot to be able to learn motions by observing demonstrations from a human, such as a therapist. In this paper, we investigate an approach for acquiring motions from skeleton observations of a human, which are collected by a robot-centric RGB-D camera. Given a sequence of observations of various joints, the joint positions are mapped to match the configuration of the robot before being executed by a PID position controller. We evaluate the method, in particular the reproduction error, in a study with QTrobot in which the robot acquired different upper-body dance moves from multiple participants. The results indicate the overall feasibility of the method, but also show that the reproduction quality is affected by noise in the skeleton observations.
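As a rough illustration of the pipeline summarized above (not the paper's actual implementation), one could extract a joint angle from three observed skeleton points and track it with a simple PID position controller. All names, gains, and the single-joint setup below are illustrative assumptions:

```python
import math

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by 3-D skeleton points a-b-c,
    e.g. shoulder-elbow-wrist for elbow flexion. Illustrative helper,
    not the paper's mapping."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

class PID:
    """Minimal single-joint PID position controller; gains are
    placeholders, not values from the study."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, current):
        error = target - current
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a real system, the target angle would be recomputed from each new skeleton frame and the controller output sent to the robot's joint; here the output could instead drive a simulated joint by integrating it as a velocity command.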