Deep neural networks have enabled accurate device-free human activity recognition, which has wide applications. Deep models can extract robust features from various sensors and generalize well even in challenging situations such as data-insufficient scenarios. However, these systems can be vulnerable to input perturbations, i.e., adversarial attacks. We empirically demonstrate that both black-box Gaussian attacks and modern white-box adversarial attacks can cause their accuracy to plummet. In this paper, we first point out that this phenomenon can pose severe safety hazards to device-free sensing systems, and then propose a novel learning framework, SecureSense, to defend against common attacks. SecureSense aims to achieve consistent predictions regardless of whether an attack is present on its input, alleviating the negative effect of the distribution perturbation caused by adversarial attacks. Extensive experiments demonstrate that our proposed method significantly enhances the robustness of existing deep models against possible attacks. The results validate that our method works well on wireless human activity recognition and person identification systems. To the best of our knowledge, this is the first work to investigate adversarial attacks on, and to develop a novel defense framework for, wireless human activity recognition in mobile computing research.
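To make the consistency idea concrete, the following is a minimal PyTorch sketch of a consistency-regularized training objective: the network is trained to classify clean inputs correctly while keeping its predictions on attacked inputs close to those on clean inputs. The function names `fgsm_perturb` and `consistency_loss`, the choice of FGSM as the white-box attack, and the KL-divergence consistency term are illustrative assumptions, not the paper's exact SecureSense formulation.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft a white-box FGSM perturbation of the input
    (an illustrative attack, standing in for the attacks
    evaluated in the paper)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    # Gradient w.r.t. the input only; model parameters are untouched.
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + epsilon * grad.sign()).detach()

def consistency_loss(model, x, y, epsilon=0.03, lam=1.0):
    """Hypothetical consistency objective: cross-entropy on the
    clean input plus a KL term pulling the prediction on the
    attacked input toward the (detached) clean prediction."""
    logits_clean = model(x)
    x_adv = fgsm_perturb(model, x, y, epsilon)
    logits_adv = model(x_adv)
    ce = F.cross_entropy(logits_clean, y)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1).detach(),
                  reduction="batchmean")
    return ce + lam * kl
```

In this sketch, `lam` trades off clean accuracy against prediction consistency under perturbation; detaching the clean prediction makes it serve as a fixed target, so only the attacked branch is pulled toward it.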