Deep neural networks (DNNs) are sensitive to adversarial data in a variety of scenarios, including the black-box scenario, where the attacker is only allowed to query the trained model and receive an output. Existing black-box methods for creating adversarial instances are costly, often relying on gradient estimation or on training a substitute network. This paper introduces \textit{Attackar}, an evolutionary, score-based, black-box attack. Attackar is based on a novel objective function that can be used in gradient-free optimization problems. The attack requires access only to the output logits of the classifier and is thus not affected by gradient masking. No additional information is needed, rendering our method better suited to real-life situations. We test its performance with three different state-of-the-art models -- Inception-v3, ResNet-50, and VGG-16-BN -- against three benchmark datasets: MNIST, CIFAR-10, and ImageNet. Furthermore, we evaluate Attackar's performance against non-differentiable transformation defenses and state-of-the-art robust models. Our results demonstrate the superior performance of Attackar, both in terms of accuracy and query efficiency.
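To make the score-based, gradient-free threat model concrete, the following minimal Python sketch shows a generic $(1+\lambda)$-style evolutionary attack that queries only the model's output logits. This is an illustration of the setting, not the paper's actual algorithm: the \texttt{query\_logits} helper, the margin-based score, and all hyperparameters are assumptions introduced here for exposition.

\begin{verbatim}
import numpy as np

def attack_score(logits, true_label):
    # Margin between the true-class logit and the best competing logit.
    # Negative once the model misclassifies. This is a generic margin
    # loss, NOT necessarily Attackar's objective function.
    other = np.delete(logits, true_label)
    return logits[true_label] - other.max()

def evolutionary_attack(query_logits, x, true_label, eps=0.05,
                        pop_size=20, generations=500, sigma=0.01):
    # query_logits(image) -> np.ndarray is the ONLY access to the model,
    # matching the score-based (logit-only) black-box threat model.
    # All hyperparameters are illustrative placeholders.
    rng = np.random.default_rng(0)
    best = np.zeros_like(x)              # current best perturbation
    best_score = attack_score(query_logits(x), true_label)
    for _ in range(generations):
        for _ in range(pop_size):
            # Mutate with Gaussian noise; keep within the L-inf ball.
            cand = np.clip(best + rng.normal(0, sigma, x.shape),
                           -eps, eps)
            adv = np.clip(x + cand, 0.0, 1.0)  # stay a valid image
            score = attack_score(query_logits(adv), true_label)
            if score < best_score:             # greedy selection
                best, best_score = cand, score
            if best_score < 0:                 # misclassified: done
                return np.clip(x + best, 0.0, 1.0)
    return np.clip(x + best, 0.0, 1.0)
\end{verbatim}

Attackar's actual objective function and evolutionary operators differ from this sketch; the point it illustrates is that selection needs only logit values and no gradients, which is why gradient masking does not impede the attack.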