With the rapid development of wearable cameras, a massive collection of egocentric videos for first-person visual perception has become available. Predicting first-person activity from egocentric videos faces many challenges, including a limited field of view, occlusions, and unstable motions. Observing that sensor data from wearable devices facilitates human activity recognition, multi-modal activity recognition is attracting increasing attention. However, the lack of related datasets hinders the development of multi-modal deep learning for egocentric activity recognition. Meanwhile, deploying deep learning in the real world has brought continual learning into focus, and continual learning often suffers from catastrophic forgetting. Yet the catastrophic forgetting problem in egocentric activity recognition, especially in the context of multiple modalities, remains unexplored for lack of a suitable dataset. To assist this research, we present UESTC-MMEA-CL, a multi-modal egocentric activity dataset for continual learning, collected with self-developed glasses that integrate a first-person camera and wearable sensors. It contains synchronized video, accelerometer, and gyroscope data for 32 types of daily activities performed by 10 participants. We compare its class types and scale with those of other publicly available datasets, and give a statistical analysis of the sensor data to show its auxiliary effect for different behaviors. We report results of egocentric activity recognition using the three modalities (RGB, acceleration, and gyroscope) separately and jointly on a base network architecture. To explore catastrophic forgetting in continual learning tasks, four baseline methods are extensively evaluated with different multi-modal combinations. We hope that UESTC-MMEA-CL can promote future studies on continual learning for first-person activity recognition in wearable applications.
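To make the joint use of the three modalities concrete, below is a minimal sketch of late fusion over RGB, acceleration, and gyroscope inputs. The branch architectures, feature sizes, and the class name MultiModalFusion are illustrative assumptions for this sketch, not the paper's actual base network.

```python
# Minimal late-fusion sketch for RGB + accelerometer + gyroscope inputs.
# All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn


class MultiModalFusion(nn.Module):
    def __init__(self, num_classes: int = 32):  # 32 daily activity classes
        super().__init__()
        # RGB branch: a tiny CNN standing in for a video backbone.
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, 64),
        )

        # Sensor branches: 1D convolutions over (3 axes, time) windows.
        def sensor_branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv1d(3, 16, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
                nn.Flatten(),
                nn.Linear(16, 64),
            )

        self.acc_branch = sensor_branch()
        self.gyro_branch = sensor_branch()
        # Late fusion: concatenate per-modality features, then classify.
        self.classifier = nn.Linear(64 * 3, num_classes)

    def forward(self, rgb, acc, gyro):
        feats = torch.cat(
            [self.rgb_branch(rgb), self.acc_branch(acc), self.gyro_branch(gyro)],
            dim=1,
        )
        return self.classifier(feats)


if __name__ == "__main__":
    model = MultiModalFusion()
    rgb = torch.randn(2, 3, 112, 112)   # batch of RGB frames
    acc = torch.randn(2, 3, 128)        # 3-axis accelerometer window
    gyro = torch.randn(2, 3, 128)       # 3-axis gyroscope window
    print(model(rgb, acc, gyro).shape)  # -> torch.Size([2, 32])
```

Dropping one branch (and shrinking the classifier input accordingly) recovers the single-modality setting, which is how the separate-versus-joint comparison above can be organized.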
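The four continual learning baselines are not named in this section; as one representative example of the kind of method evaluated, the sketch below shows experience replay with a reservoir buffer, a common baseline for mitigating catastrophic forgetting when activity classes arrive sequentially. The buffer capacity and all names are illustrative assumptions.

```python
# Experience-replay sketch: rehearse a bounded memory of past samples
# while training on the current task. Illustrative, not the paper's method.
import random
import torch
import torch.nn as nn


class ReservoirBuffer:
    """Bounded memory of past (input, label) pairs via reservoir sampling."""

    def __init__(self, capacity: int = 500):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x: torch.Tensor, y: torch.Tensor) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k: int):
        return random.sample(self.data, k=min(k, len(self.data)))


def train_task(model, optimizer, task_loader, buffer: ReservoirBuffer):
    """Train on one task while rehearsing examples stored from earlier tasks."""
    criterion = nn.CrossEntropyLoss()
    for x, y in task_loader:
        orig_x, orig_y = x, y
        if buffer.data:  # mix in a replayed mini-batch from past tasks
            rx, ry = zip(*buffer.sample(len(x)))
            x = torch.cat([x, torch.stack(rx)])
            y = torch.cat([y, torch.stack(ry)])
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
        # Store only current-task samples, so replayed items are not re-added.
        for xi, yi in zip(orig_x, orig_y):
            buffer.add(xi.detach(), yi)
```

In a multi-modal setting, each buffered sample would hold the synchronized RGB, accelerometer, and gyroscope inputs together, so rehearsal preserves the cross-modal correspondence of past activities.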