Human gesture recognition with Radio Frequency (RF) signals has attracted considerable attention due to the omnipresence, privacy preservation, and broad coverage of RF signals. These gesture recognition systems rely on neural networks trained with large amounts of labeled data. However, a recognition model trained on data collected under specific conditions suffers significant performance degradation when applied in practical deployment, which limits the applicability of gesture recognition systems. In this paper, we propose an unsupervised domain adaptation framework for RF-based gesture recognition that aims to enhance the performance of the recognition model under new conditions by making effective use of unlabeled data collected under those conditions. We first employ pseudo-labeling and consistency regularization to exploit unlabeled data for model training and to eliminate feature discrepancies across domains. We then propose a confidence constraint loss to enhance the effectiveness of pseudo-labeling, and design two data augmentation methods based on the characteristics of RF signals to strengthen consistency regularization, making the framework more effective and robust. Furthermore, we propose a cross-match loss that integrates pseudo-labeling and consistency regularization, which keeps the whole framework simple yet effective. Extensive experiments demonstrate that the proposed framework achieves 4.35% and 2.25% accuracy improvements compared with state-of-the-art methods on a public WiFi dataset and a millimeter wave (mmWave) radar dataset, respectively.
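The combination of pseudo-labeling and consistency regularization described above resembles FixMatch-style training: a confident prediction on a weakly augmented unlabeled sample supervises the prediction on a strongly augmented view of the same sample. The sketch below illustrates this general idea only; the function name `cross_match_loss`, the confidence threshold, and the augmentation hooks are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of pseudo-labeling + consistency regularization on unlabeled
# RF samples, assuming a FixMatch-style formulation. All names are hypothetical.
import torch
import torch.nn.functional as F

def cross_match_loss(model, x_unlabeled, weak_aug, strong_aug, threshold=0.95):
    """Consistency loss driven by confident pseudo-labels (illustrative sketch)."""
    with torch.no_grad():
        # Derive pseudo-labels from the weakly augmented view.
        probs_weak = F.softmax(model(weak_aug(x_unlabeled)), dim=-1)
        conf, pseudo = probs_weak.max(dim=-1)
        mask = (conf >= threshold).float()   # keep only confident samples

    # Enforce consistency: the strongly augmented view must match the pseudo-label.
    logits_strong = model(strong_aug(x_unlabeled))
    per_sample = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (per_sample * mask).mean()

# Toy usage with a dummy classifier and stand-in augmentations.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 6))
x = torch.randn(8, 1, 8, 8)                      # batch of unlabeled "RF" features
identity = lambda t: t                           # stand-in for a weak augmentation
noise = lambda t: t + 0.1 * torch.randn_like(t)  # stand-in for a strong augmentation
loss = cross_match_loss(model, x, identity, noise)
```

In this sketch, the confidence mask plays the role of a confidence constraint (only high-confidence pseudo-labels contribute to the loss), and the weak/strong augmentation pair would be instantiated with RF-specific transformations as the abstract suggests.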