Deep neural networks (DNNs) are increasingly applied to traditional radio-frequency (RF) problems. Previous work has shown that although DNN classifiers are typically more accurate than traditional signal-processing algorithms, they are vulnerable to intentionally crafted adversarial perturbations that deceive the classifiers and significantly reduce their accuracy. RF communications systems can exploit such intentional perturbations to evade reactive jammers and interception systems that rely on DNN classifiers to identify their target modulation scheme. While previous research on RF adversarial perturbations has established the theoretical feasibility of such attacks through simulation studies, critical questions about real-world implementation and viability remain unanswered. This work attempts to bridge that gap by defining class-specific, sample-independent adversarial perturbations that are shown to be effective, time-invariant, and computationally feasible to apply in real time. We demonstrate the effectiveness of these attacks over the air, across a physical channel, using software-defined radios (SDRs). Finally, we demonstrate that these adversarial perturbations can be emitted from a source other than the communications device itself, making these attacks practical for devices that cannot manipulate their transmitted signals at the physical layer.
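To make the notion of a class-specific, sample-independent perturbation concrete, the following is a minimal toy sketch, not the paper's actual method. It stands in for the DNN modulation classifier with a logistic model on flattened signal features, and crafts a single FGSM-style perturbation from the gradient averaged over one class; that same fixed delta is then added to every sample of the class. All names (`universal_perturbation`, the weights `w`, `b`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DNN modulation classifier: logistic regression
# on flattened signal features (w, b are hypothetical trained weights).
n_feat = 64
w = rng.normal(size=n_feat)
b = 0.0

def predict(x):
    return (x @ w + b > 0).astype(int)

# Synthetic batch of signals all belonging to one target class (label 1).
X = rng.normal(loc=0.3 * w, scale=1.0, size=(100, n_feat))
y = np.ones(100, dtype=int)

def universal_perturbation(X, w, b, eps=0.5):
    """Class-specific, sample-independent perturbation: average the
    input gradient of the loss over the whole class, then take one
    FGSM-style signed step. The same delta is reused for every sample."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid outputs
    grad = ((p - 1.0)[:, None] * w).mean(axis=0)  # mean dL/dx for label 1
    return eps * np.sign(grad)                    # one shared delta

delta = universal_perturbation(X, w, b)
acc_clean = (predict(X) == y).mean()
acc_adv = (predict(X + delta) == y).mean()  # same delta on every sample
```

Because the perturbation is computed once per class rather than per sample, it can be precomputed offline and emitted continuously, which is what makes the real-time, separate-source attack described above plausible.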