WiFi-based smart human sensing enabled by Channel State Information (CSI) has received great attention in recent years. However, CSI-based sensing systems suffer from performance degradation when deployed in new environments. Existing works address this problem through domain adaptation, which requires massive unlabeled high-quality data from the new environment, data that is usually unavailable in practice. In this paper, we propose AirFi, a novel augmented, environment-invariant, and robust WiFi gesture recognition system that tackles environment dependency from a new perspective. AirFi is a domain generalization framework that learns the critical, environment-independent part of CSI and generalizes the model to unseen scenarios without collecting any data for adaptation to the new environment. AirFi extracts features common to several training environments and minimizes the distribution differences among them. These features are further augmented to be more robust to environmental variation. Moreover, the system can be further improved with few-shot learning techniques. Compared to state-of-the-art methods, AirFi works across different environment settings without acquiring any CSI data from the new environment. The experimental results demonstrate that our system remains robust in new environments and outperforms the compared systems.
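The core generalization idea above, extracting features from several training environments and penalizing the distribution differences among them, can be sketched as a pairwise alignment loss. The snippet below is a minimal illustration, not the paper's implementation: it assumes a hypothetical linear-kernel MMD (distance between batch means) as the distribution-discrepancy measure and uses random arrays in place of encoded CSI features.

```python
import numpy as np

def linear_mmd(x, y):
    """Squared MMD with a linear kernel: squared distance between the
    per-environment batch means in feature space (an illustrative choice)."""
    return float(np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2))

rng = np.random.default_rng(0)
# Hypothetical encoded CSI feature batches from three training environments;
# shapes and means are made up for illustration.
envs = [rng.normal(loc=m, size=(8, 16)) for m in (0.0, 0.5, 1.0)]

# Pairwise alignment penalty a feature encoder would minimize so that
# features from different environments follow similar distributions.
align = sum(linear_mmd(a, b) for i, a in enumerate(envs) for b in envs[i + 1:])
```

In a full system this penalty would be added to the recognition loss during training, pushing the encoder toward environment-invariant features.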