Recently, various modules, such as squeeze-and-excitation, have been proposed to improve the quality of features learned from wearable sensor signals. However, these modules often add a large number of parameters, which makes them unsuitable for building lightweight human activity recognition models that can be easily deployed on end devices. In this research, we propose a feature learning module, termed WSense, which uses two 1D CNN and global max pooling layers to extract features of similar quality from wearable sensor data while removing the differences in activity recognition models caused by the size of the sliding window. Experiments were carried out using CNN and ConvLSTM feature learning pipelines on a dataset obtained with a single accelerometer (WISDM) and another obtained by fusing accelerometers, gyroscopes, and magnetometers (PAMAP2) under various sliding window sizes. A total of nine hundred and sixty (960) experiments were conducted to validate the WSense module against baselines and existing methods on the two datasets. The results show that the WSense module enabled the pipelines to learn features of similar quality and to outperform the baselines and existing models while keeping the model size minimal and uniform across all sliding window segmentations. The code is available at https://github.com/AOige/WSense.
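The abstract describes the module only briefly; the sketch below illustrates how a feature-learning block of that shape could be written in Keras. The filter counts, kernel sizes, the sequential arrangement of the two 1D convolutions, and the WISDM-like input shape are illustrative assumptions, not the authors' exact configuration. What it demonstrates is the property claimed above: global max pooling collapses the time axis, so the feature vector size, and hence the classifier's parameter count, stays the same regardless of the sliding-window length.

```python
# Minimal sketch of a WSense-style feature learning block.
# Assumptions: filter counts, kernel sizes, and the stacking of the two
# 1D convolutions are illustrative, not the authors' exact design.
import tensorflow as tf
from tensorflow.keras import layers

def wsense_block(inputs):
    """Two 1D convolutions followed by global max pooling.

    Global max pooling collapses the time axis, so the output shape
    (batch, channels) is independent of the sliding-window length.
    """
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(x)
    return layers.GlobalMaxPooling1D()(x)

# Example: a window of 128 timesteps with 3 accelerometer channels
# (a WISDM-like setting); 6 activity classes is also illustrative.
inputs = tf.keras.Input(shape=(128, 3))
features = wsense_block(inputs)
outputs = layers.Dense(6, activation="softmax")(features)
model = tf.keras.Model(inputs, outputs)
model.summary()
```

Because only the pooled channel dimension reaches the dense classifier, changing the window size (e.g., 64 or 256 timesteps instead of 128) leaves the parameter count unchanged, which is consistent with the abstract's claim of a minimal and uniform model size across sliding-window segmentations.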