As robots become more present in open human environments, it will become crucial for robotic systems to understand and predict human motion. Such capabilities depend heavily on the quality and availability of motion capture data. However, existing datasets of full-body motion rarely include 1) long sequences of manipulation tasks, 2) the 3D model of the workspace geometry, and 3) eye-gaze, all of which are important when a robot must predict the movements of humans in close proximity. Hence, in this paper we present a novel dataset of full-body motion for everyday manipulation tasks that includes all of the above. The motion data was captured using a traditional motion capture system based on reflective markers. We additionally recorded eye-gaze using a wearable pupil-tracking device. As we show in experiments, the dataset can be used for the design and evaluation of full-body motion prediction algorithms. Furthermore, our experiments show that eye-gaze is a powerful predictor of human intent. The dataset comprises 180 min of motion capture data covering 1627 pick-and-place actions. It is available at https://humans-to-robots-motion.github.io/mogaze and is planned to be extended to collaborative tasks with two humans in the near future.