The increasing popularity of deep-learning-powered applications raises the issue of the vulnerability of neural networks to adversarial attacks. In other words, hardly perceptible changes to the input data lead to erroneous outputs, hindering the use of neural networks in applications that involve security-critical decisions. A number of previous works have already thoroughly evaluated the most commonly used configuration, Convolutional Neural Networks (CNNs), against different types of adversarial attacks. Moreover, recent works have demonstrated the transferability of some adversarial examples across different neural network models. This paper studies the robustness of newly emerging models, such as SpinalNet-based neural networks and Compact Convolutional Transformers (CCT), on the image classification problem of the CIFAR-10 dataset. Each architecture was tested against four white-box attacks and three black-box attacks. Unlike the VGG and SpinalNet models, the attention-based CCT configuration demonstrated a large span between strong robustness and vulnerability to adversarial examples. Finally, a study of transferability between the VGG, the VGG-inspired SpinalNet, and the pretrained CCT 7/3x1 models was conducted. It was shown that high effectiveness of an attack on a certain individual model does not guarantee its transferability to other models.
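As a rough illustration of the kind of transferability evaluation described above (this is a hypothetical sketch, not the paper's code; the use of one-step FGSM, the epsilon value, and the function names are illustrative assumptions), adversarial examples can be crafted against a source model and then evaluated on a different target model:

```python
# Hypothetical sketch: craft FGSM adversarial examples on a source model
# and measure how often they also fool a separate target model.
import torch
import torch.nn.functional as F

def fgsm_examples(model, images, labels, eps=8 / 255):
    """One-step FGSM: perturb images along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()

def transfer_error_rate(source_model, target_model, loader, eps=8 / 255, device="cpu"):
    """Fraction of target-model errors on adversarial examples crafted
    against the source model (higher means better transferability)."""
    errors, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = fgsm_examples(source_model, images, labels, eps)
        with torch.no_grad():
            errors += (target_model(adv).argmax(dim=1) != labels).sum().item()
        total += labels.size(0)
    return errors / total
```

A low transfer error rate on the target model, even when the same examples cause many errors on the source model, would correspond to the observation that a strong attack on one model does not guarantee transferability to others.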