Given their substantial success in addressing a wide range of computer vision challenges, Convolutional Neural Networks (CNNs) are increasingly being used in smart home applications, many of which rely on the automatic recognition of human activities. In this context, low-power radar devices have recently gained popularity as recording sensors, since their use mitigates a number of privacy concerns, a key issue when relying on conventional video cameras. Another frequently cited concern when designing smart home applications is their resilience against cyberattacks. It is, for instance, well known that the combination of images and CNNs is vulnerable to adversarial examples, malicious data points that force machine learning models to produce wrong classifications at test time. In this paper, we investigate the vulnerability to adversarial attacks of radar-based CNNs that have been designed to recognize human gestures. Through experiments with four unique threat models, we show that radar-based CNNs are susceptible to both white- and black-box adversarial attacks. We also expose the existence of an extreme adversarial attack case in which it is possible to change the prediction made by the radar-based CNNs by perturbing only the padding of the inputs, without touching the frames in which the gesture itself occurs. Moreover, we observe that gradient-based attacks do not apply perturbation randomly, but concentrate it on important features of the input data. We highlight these important features by making use of Grad-CAM, a popular neural network interpretability method, thereby showing the connection between adversarial perturbation and prediction interpretability.
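To make the padding-only attack case more concrete, the sketch below illustrates, under stated assumptions, how a gradient-based (FGSM-style) perturbation can be masked so that only the padding frames of a radar input are modified while the frames containing the gesture remain untouched. The names `model`, `x`, `y`, and `pad_mask` are hypothetical placeholders, and this is not the exact attack implementation evaluated in the paper.

```python
# Minimal sketch (PyTorch), assuming a classifier that takes a radar input of
# shape (1, frames, H, W) and a binary mask marking the padding frames.
import torch
import torch.nn.functional as F

def padding_only_fgsm(model, x, y, pad_mask, epsilon=0.05):
    """FGSM-style step restricted to padding frames.

    x        : radar input tensor of shape (1, frames, H, W)
    y        : ground-truth label tensor of shape (1,)
    pad_mask : tensor broadcastable to x, with 1 on padding frames and
               0 on frames where the gesture itself occurs
    epsilon  : perturbation budget (L-infinity step size)
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Standard FGSM sign step, but zeroed out everywhere except the padding.
    perturbation = epsilon * x_adv.grad.sign() * pad_mask
    return (x_adv + perturbation).detach()
```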