Human Activity Recognition (HAR) from wearable sensor data identifies movements or activities in unconstrained environments. HAR is a challenging problem because it presents great variability across subjects. Obtaining large amounts of labelled data is not straightforward, since wearable sensor signals are not easy to label by simple human inspection. In our work, we propose the use of neural networks to generate realistic signals and features from monocular videos of human activity. We show how these generated features and signals can be used, in place of their real counterparts, to train HAR models that recognize activities from signals obtained with wearable sensors. To validate our methods, we perform experiments on an activity recognition dataset created for the improvement of industrial work safety. We show that our model can realistically generate virtual sensor signals and features suitable for training a HAR classifier, with performance comparable to one trained on real sensor data. Our results enable the use of available, labelled video data to train HAR models that classify signals from wearable sensors.