Robots have been steadily increasing their presence in our daily lives, working alongside humans to assist with various tasks on industry floors, in offices, and in homes. Automated assembly is one of the key applications of robots, and next-generation assembly systems could become much more efficient through collaborative human-robot systems. However, although collaborative robots have been available for decades, their use in truly collaborative systems has been limited. This is because a truly collaborative human-robot system must adjust its operation to the uncertainty and imprecision in human actions, ensure safety during interaction, and so on. In this paper, we present a system for human-robot collaborative assembly using learning from demonstration and pose estimation, so that the robot can adapt to the uncertainty introduced by human operation. Learning from demonstration is used to generate motion trajectories for the robot based on pose estimates of different goal locations provided by a deep learning-based vision system. The proposed system is demonstrated on a physical 6-DoF manipulator in a collaborative human-robot assembly scenario. Through various experiments, we show that the system's operation successfully generalizes to changes in the initial and final goal locations.
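As a rough illustration of how a demonstrated motion can be re-targeted to new start and goal poses (e.g., goal locations reported by a vision system), the sketch below implements a standard one-dimensional discrete Dynamic Movement Primitive. The abstract does not specify the paper's learning-from-demonstration representation, so the DMP choice, the class name, and all parameters here are assumptions for illustration only.

```python
# Illustrative sketch only: assumes a discrete Dynamic Movement Primitive (DMP),
# a common learning-from-demonstration representation; all identifiers are hypothetical.
import numpy as np

class DMP1D:
    """One-dimensional discrete DMP (spring-damper system with a learned forcing term)."""

    def __init__(self, n_basis=30, alpha_z=25.0, alpha_x=4.0):
        self.alpha_z = alpha_z          # spring gain
        self.beta_z = alpha_z / 4.0     # damper gain (critically damped)
        self.alpha_x = alpha_x          # canonical-system decay rate
        # basis-function centres spread over the phase variable x in (0, 1]
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
        self.h = 1.0 / np.gradient(self.c) ** 2
        self.w = np.zeros(n_basis)

    def _features(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return psi / (psi.sum() + 1e-10)

    def fit(self, y_demo, dt):
        """Learn forcing-term weights from a single demonstrated trajectory."""
        self.y0, self.g = y_demo[0], y_demo[-1]
        self.tau = dt * (len(y_demo) - 1)
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        # forcing term that would reproduce the demonstration exactly
        f_target = self.tau**2 * ydd - self.alpha_z * (
            self.beta_z * (self.g - y_demo) - self.tau * yd)
        x = np.exp(-self.alpha_x * np.linspace(0, 1, len(y_demo)))
        scale = x * (self.g - self.y0)
        Psi = np.stack([self._features(xi) for xi in x])      # (T, n_basis)
        # per-basis weighted linear regression for the weights
        num = (Psi * (scale * f_target)[:, None]).sum(axis=0)
        den = (Psi * (scale**2)[:, None]).sum(axis=0) + 1e-10
        self.w = num / den

    def rollout(self, y0_new, g_new, dt):
        """Generate a trajectory toward a new start/goal, e.g. from a pose estimate."""
        y, yd, x = y0_new, 0.0, 1.0
        traj = [y]
        for _ in range(int(self.tau / dt)):
            f = self._features(x) @ self.w * x * (g_new - y0_new)
            ydd = (self.alpha_z * (self.beta_z * (g_new - y) - self.tau * yd) + f) / self.tau**2
            yd += ydd * dt
            y += yd * dt
            x += -self.alpha_x * x / self.tau * dt
            traj.append(y)
        return np.array(traj)


if __name__ == "__main__":
    dt = 0.01
    t = np.arange(0, 1 + dt, dt)
    demo = np.sin(0.5 * np.pi * t)          # demonstrated 1-D motion from 0 to 1
    dmp = DMP1D()
    dmp.fit(demo, dt)
    # re-target the learned skill to a new start and goal location
    new_traj = dmp.rollout(y0_new=0.2, g_new=1.5, dt=dt)
    print(new_traj[0], new_traj[-1])        # starts near 0.2, converges toward 1.5
```

In practice one such primitive would be fit per Cartesian dimension, with the new start and goal taken from the pose-estimation output rather than hard-coded values.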