Recently, deep neural networks (DNNs) have been used extensively for automatic modulation classification (AMC), and the results have been quite promising. However, DNNs have high memory and computation requirements, making them impractical for edge networks where devices are resource-constrained. They are also vulnerable to adversarial attacks, which is a significant security concern. This work proposes a rotated binary large ResNet (RBLResNet) for AMC that can be deployed at the network edge owing to its low memory and computational complexity. The performance gap between RBLResNet and existing architectures with floating-point weights and activations is closed by two proposed ensemble methods: (i) multilevel classification (MC) and (ii) bagging multiple RBLResNets, both of which retain low memory and computational requirements. The MC method achieves an accuracy of $93.39\%$ at $10$ dB over all $24$ modulation classes of the Deepsig dataset, comparable to state-of-the-art (SOTA) performance while requiring $4.75$ times less memory and $1214$ times less computation. Furthermore, RBLResNet exhibits high adversarial robustness compared to existing DNN models. The proposed MC method with RBLResNets attains an adversarial accuracy of $87.25\%$ over a wide range of SNRs, surpassing, to the best of our knowledge, the robustness of all existing SOTA methods. Low memory, low computation, and high adversarial robustness make the proposed approach well suited for robust AMC on low-power edge devices.
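To make the source of the memory and computation savings concrete, the following is a minimal sketch (not the authors' RBLResNet code) of the two ingredients the abstract names: 1-bit weight/activation binarization trained with a straight-through estimator, and majority-vote bagging over several independently trained binarized classifiers. The class and function names (`BinarizeSTE`, `BinConv2d`, `bagged_predict`) are illustrative assumptions, not identifiers from the paper.

```python
# Minimal sketch, assuming standard binary-neural-network conventions;
# it is NOT the paper's RBLResNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; clipped identity gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients only where |x| <= 1.
        return grad_out * (x.abs() <= 1).float()


class BinConv2d(nn.Conv2d):
    """Convolution whose weights and activations are binarized to {-1, +1},
    so weights can be stored with 1 bit each and multiplications reduce to
    cheap sign operations at inference time."""

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        x_bin = BinarizeSTE.apply(x)
        return F.conv2d(x_bin, w_bin, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


def bagged_predict(models, x):
    """Majority vote over an ensemble of independently trained binarized classifiers."""
    votes = torch.stack([m(x).argmax(dim=1) for m in models])  # (n_models, batch)
    return votes.mode(dim=0).values                            # per-sample majority class
```

In this style of network, only the first and last layers are typically kept in full precision; the bulk of the convolutional layers use 1-bit weights and activations, which is where the reported multi-fold memory and compute reductions would come from.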