Robots are increasingly present in our lives, sharing the workspace and tasks with human co-workers. However, existing interfaces for human-robot interaction and cooperation (HRI/C) offer limited intuitiveness, and safety remains a major concern when humans and robots share the same workspace. Often, this stems from the lack of a reliable estimate of the human pose in space, which is the primary input both for computing the human-robot minimum distance (required for safety and collision avoidance) and for HRI/C systems featuring machine learning algorithms that classify human behaviours and gestures. Each sensor type has its own characteristics, leading to problems such as occlusions (vision) and drift (inertial) when used in isolation. In this paper, we propose a combined system that merges the human tracking provided by a 3D vision sensor with the pose estimation provided by a set of inertial measurement units (IMUs) placed on the human body limbs. The IMUs compensate for the gaps in occluded areas, preserving tracking continuity. To mitigate the lingering effects of the IMU offset, we propose a continuous online calculation of the offset value. Experimental tests were designed to simulate human motion in a human-robot collaborative environment, where the robot moves away to avoid unexpected collisions with the human. Results indicate that our approach captures the position of the human body, for example the forearm, with millimetre-level precision and robustness to occlusions.
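The continuous online offset correction can be illustrated with a minimal sketch. The Python snippet below is an assumption-laden illustration, not the paper's implementation: it assumes the vision sensor returns a 3D limb position when the limb is unoccluded (and `None` otherwise), that the IMUs provide a dead-reckoned position for the same limb, and that the vision-minus-IMU offset is smoothed with an exponential moving average (the smoothing scheme and the names used are our assumptions).

```python
from typing import Optional

import numpy as np


class OffsetCorrectedTracker:
    """Hypothetical fusion of 3D-vision and IMU limb positions.

    While the limb is visible, the vision measurement is trusted and the
    vision-minus-IMU offset is refreshed online. During an occlusion the
    last offset is added to the drifting IMU estimate, preserving
    tracking continuity.
    """

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha          # EMA smoothing factor (assumed)
        self.offset = np.zeros(3)   # current vision-minus-IMU offset [m]

    def update(self, imu_pos: np.ndarray,
               vision_pos: Optional[np.ndarray]) -> np.ndarray:
        if vision_pos is not None:
            # Limb visible: continuously update the offset so it is
            # current when the next occlusion begins.
            self.offset = ((1 - self.alpha) * self.offset
                           + self.alpha * (vision_pos - imu_pos))
            return vision_pos
        # Limb occluded: compensate the IMU estimate with the last offset.
        return imu_pos + self.offset


# Example usage: vision drops out on the third sample.
tracker = OffsetCorrectedTracker()
samples = [
    (np.array([0.50, 0.20, 1.00]), np.array([0.52, 0.21, 1.01])),
    (np.array([0.51, 0.20, 1.00]), np.array([0.53, 0.21, 1.01])),
    (np.array([0.52, 0.20, 1.00]), None),  # occlusion
]
for imu_pos, vision_pos in samples:
    print(tracker.update(imu_pos, vision_pos))
```

During occlusion, the corrected estimate tracks the IMU motion while the stored offset removes the accumulated bias, which is the intent of the online offset calculation described above.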