Wearable cameras make it possible to collect images and videos of humans interacting with the world. While human-object interactions have been thoroughly investigated in third-person vision, the problem has been understudied in egocentric settings and in industrial scenarios. To fill this gap, we introduce MECCANO, the first dataset of egocentric videos for studying human-object interactions in industrial-like settings. MECCANO was acquired by 20 participants who were asked to build a motorbike model, which required them to interact with small objects and tools. The dataset has been explicitly labeled for the task of recognizing human-object interactions from an egocentric perspective. Specifically, each interaction has been labeled both temporally (with action segments) and spatially (with active object bounding boxes). With the proposed dataset, we investigate four different tasks: 1) action recognition, 2) active object detection, 3) active object recognition, and 4) egocentric human-object interaction detection, a revisited version of the standard human-object interaction detection task. Baseline results show that the MECCANO dataset is a challenging benchmark for studying egocentric human-object interactions in industrial-like scenarios. We publicly release the dataset at https://iplab.dmi.unict.it/MECCANO.
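For concreteness, the two annotation types described above (temporal action segments and spatial active-object bounding boxes) can be pictured as records like the following. This is a minimal, hypothetical Python sketch: the class and field names (`InteractionSegment`, `ActiveObjectBox`, and their attributes) are illustrative assumptions, not the dataset's actual release format, which is documented on the dataset page.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ActiveObjectBox:
    """Spatial label: a bounding box around an object being interacted with."""
    frame: int                       # frame index within the video
    label: str                       # object class, e.g. a tool or motorbike part
    box: Tuple[int, int, int, int]   # (x_min, y_min, x_max, y_max) in pixels

@dataclass
class InteractionSegment:
    """Temporal label: one human-object interaction as an action segment."""
    video_id: str
    verb: str                        # action verb, e.g. "take"
    start_frame: int                 # first frame of the action segment
    end_frame: int                   # last frame of the action segment
    active_objects: List[ActiveObjectBox]  # spatial labels within the segment
```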