Collaborative robots have become a popular tool for increasing productivity in partly automated manufacturing plants. Intuitive robot teaching methods are required to quickly and flexibly adapt robot programs to new tasks. Gestures play an essential role in human communication. However, in human-robot interaction scenarios, gesture-based user interfaces are so far rarely used, and if they are, they typically employ a one-to-one mapping of gestures to robot control variables. In this paper, we propose a method that infers the user's intent based on gesture episodes, the context of the situation, and common sense. The approach is evaluated in a simulated table-top manipulation setting. We conduct deterministic experiments with simulated users and show that the system can even handle the personal preferences of each user.