Adversarial examples for neural network image classifiers are known to be transferable: examples optimized to be misclassified by a source classifier are often misclassified as well by classifiers with different architectures. However, targeted adversarial examples -- optimized to be classified as a chosen target class -- tend to be less transferable between architectures. While prior research on constructing transferable targeted attacks has focused on improving the optimization procedure, in this work we examine the role of the source classifier. Here, we show that training the source classifier to be "slightly robust" -- that is, robust to small-magnitude adversarial examples -- substantially improves the transferability of class-targeted and representation-targeted adversarial attacks, even between architectures as different as convolutional neural networks and transformers. The results we present provide insight into the nature of adversarial examples as well as the mechanisms underlying so-called "robust" classifiers.
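To make the setup concrete, the sketch below (not the authors' code) illustrates the pipeline the abstract describes: a class-targeted PGD attack is crafted against a "slightly robust" source classifier and then evaluated on a target classifier with a different architecture. The model loaders, epsilon value, step count, and variable names are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target_class, eps=16/255, alpha=2/255, steps=50):
    """Targeted L-infinity PGD: step toward the chosen target class."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Cross-entropy toward the *target* class; we descend on this loss.
        loss = F.cross_entropy(model(x_adv), target_class)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                # move toward target class
            x_adv = x + (x_adv - x).clamp(-eps, eps)           # project into eps-ball around x
            x_adv = x_adv.clamp(0.0, 1.0)                      # keep pixels in valid range
    return x_adv.detach()

# Hypothetical models (assumptions): a source CNN adversarially trained with a
# small perturbation budget ("slightly robust") and a standard vision transformer
# as the transfer target.
# source_model = load_slightly_robust_resnet().eval()
# target_model = load_standard_vit().eval()
# x, _ = next(iter(val_loader))                                # clean images in [0, 1]
# target = torch.full((x.size(0),), TARGET_CLASS, dtype=torch.long)
# x_adv = targeted_pgd(source_model, x, target)
# transfer_rate = (target_model(x_adv).argmax(dim=1) == target).float().mean()
```

In this sketch, the only change relative to a standard transfer-attack experiment is the choice of source model: the claim under test is that swapping a standard source classifier for a slightly robust one raises the targeted transfer rate, even across the CNN-to-transformer gap.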