Deep Convolutional Neural Network (CNN) models are among the most popular architectures in deep learning. Thanks to their wide range of applications across different areas, they are extensively used in both academia and industry. CNN-based models power several exciting applications, such as early breast cancer detection or the detection of developmental delays in children (e.g., autism, speech disorders, etc.). However, previous studies have demonstrated that these models are vulnerable to various adversarial attacks. Interestingly, some adversarial examples can remain effective against different, unknown models. This property is known as adversarial transferability, and prior works have analyzed it only superficially and in a very limited application domain. In this paper, we aim to demystify the transferability threat in computer networks by studying the possibility of transferring adversarial examples across models. In particular, we provide the first comprehensive study assessing the robustness of CNN-based models for computer networks against adversarial transferability. In our experiments, we consider five different attacks: (1) the Iterative Fast Gradient Sign Method (I-FGSM), (2) the Jacobian-based Saliency Map Attack (JSMA), (3) the L-BFGS attack, (4) the Projected Gradient Descent (PGD) attack, and (5) the DeepFool attack. These attacks are performed on two well-known datasets: the N-BaIoT dataset and the Domain Generating Algorithms (DGA) dataset. Our results show that transferability occurs in specific use cases where an adversary can easily compromise the victim's network with very little knowledge of the targeted model.
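For context, the simplest of the listed attacks, I-FGSM, repeatedly perturbs an input along the sign of the loss gradient while projecting the perturbation back into an epsilon-ball around the original sample. The snippet below is a minimal sketch in PyTorch, assuming a generic classifier; the loss choice and the hyperparameters (eps, alpha, steps) are illustrative placeholders, not the settings used in the paper.

```python
import torch
import torch.nn.functional as F

def i_fgsm(model, x, y, eps=0.03, alpha=0.005, steps=10):
    """Minimal I-FGSM sketch: iterative sign-gradient steps with
    projection into the L-infinity ball of radius eps around x.
    (eps, alpha, steps are illustrative, not the paper's settings.)"""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # take one sign-gradient step, then project back into the eps-ball
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.clamp(x_adv, x - eps, x + eps)
    return x_adv.detach()
```

In a transferability setting, such examples are crafted against a surrogate model and then evaluated on a different, unseen target model.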