Radio frequency (RF) fingerprinting, which extracts the unique hardware imperfections of radio devices, has emerged as a promising physical-layer device identification mechanism in zero trust architectures and beyond-5G networks. In particular, deep learning (DL) methods have demonstrated state-of-the-art performance in this domain. However, existing approaches have primarily focused on enhancing system robustness against temporal and spatial variations in wireless environments, while the security vulnerabilities of these DL-based approaches have often been overlooked. In this work, we systematically investigate the security risks of DL-based RF fingerprinting systems through an adversarial-driven experimental analysis. We observe a consistent misclassification behavior in DL models under domain shifts, where one device is frequently misclassified as another specific device. Our analysis, based on extensive real-world experiments, demonstrates that this behavior can be exploited as an effective backdoor that allows external attackers to infiltrate the system. Furthermore, we show that training DL models on raw received signals causes the models to entangle RF fingerprints with environmental and signal-pattern features, creating additional attack vectors that cannot be mitigated solely through post-processing security methods such as confidence thresholds.
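To make the confidence-threshold defense mentioned above concrete, here is a minimal sketch of such a post-processing gate: the classifier's prediction is accepted only when its top softmax confidence exceeds a threshold, and otherwise the device is rejected as unknown. The function names (`softmax`, `threshold_gate`) and the threshold value are illustrative assumptions, not part of any specific system described here.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = np.asarray(logits, dtype=float)
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def threshold_gate(logits, tau=0.9):
    """Post-processing confidence gate (illustrative sketch).

    Accept the predicted device index only if the top softmax
    probability reaches tau; otherwise reject as unknown (-1).
    Returns (predicted_index_or_minus_1, top_confidence).
    """
    probs = softmax(logits)
    pred = int(np.argmax(probs))
    conf = float(probs[pred])
    return (pred, conf) if conf >= tau else (-1, conf)

# A confident prediction passes the gate; a near-uniform one is rejected.
accepted = threshold_gate([10.0, 0.0, 0.0])   # high top confidence
rejected = threshold_gate([1.0, 1.0, 1.0])    # confidence 1/3 < tau
```

The limitation the abstract points to is that such a gate only filters *low-confidence* errors: if the model has entangled RF fingerprints with environmental or signal-pattern features, an attacker can trigger *high-confidence* misclassifications that sail through any threshold, which is why post-processing alone cannot close this attack vector.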