The existence of adversarial examples capable of fooling trained neural network classifiers calls for a much better understanding of possible attacks to guide the development of safeguards against them. This includes attack methods in the challenging non-interactive blackbox setting, where adversarial attacks are generated without any access, including queries, to the target model. Prior attacks in this setting have relied mainly on algorithmic innovations derived from empirical observations (e.g., that momentum helps), and lack principled transferability guarantees. In this work, we provide a theoretical foundation for crafting adversarial examples that transfer to an entire hypothesis class. We introduce Adversarial Example Games (AEG), a framework that models the crafting of adversarial examples as a min-max game between a generator of attacks and a classifier. AEG provides a new way to design adversarial examples by adversarially training a generator and a classifier from a given hypothesis class (e.g., architecture). We prove that this game has an equilibrium, and that the optimal generator is able to craft adversarial examples that can attack any classifier from the corresponding hypothesis class. We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets, outperforming prior state-of-the-art approaches with average relative improvements of $29.9\%$ and $47.2\%$ against undefended and robust models (Tables 2 and 3), respectively.
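To make the min-max structure concrete, the following is a minimal toy sketch (not the paper's actual method) of alternating updates between a perturbation generator (the max player) and a classifier (the min player). The data, the linear generator `G`, the logistic classifier `(w, b)`, and the budget `eps` are all hypothetical stand-ins chosen for illustration; the real AEG framework trains deep networks on image data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs (a hypothetical stand-in for MNIST/CIFAR-10).
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

eps = 0.3  # L-infinity perturbation budget for the generator

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Min player: a logistic-regression classifier (w, b).
w, b = np.zeros(2), 0.0
# Max player: a linear generator whose output is squashed into the eps-ball.
G = np.zeros((2, 2))

def perturb(X):
    # tanh keeps each coordinate of the perturbation inside [-eps, eps].
    return X + eps * np.tanh(X @ G)

lr = 0.1
for step in range(200):
    Xa = perturb(X)
    p = sigmoid(Xa @ w + b)
    g = p - y  # gradient of the cross-entropy loss w.r.t. the logits
    # Classifier descends the loss on the attacked inputs (min step) ...
    w -= lr * (Xa.T @ g / len(y))
    b -= lr * g.mean()
    # ... while the generator ascends it (max step), via the chain rule
    # through the tanh squashing.
    dXa = np.outer(g, w) / len(y)                        # dL/dXa
    dG = X.T @ (dXa * eps * (1 - np.tanh(X @ G) ** 2))   # dL/dG
    G += lr * dG

# By construction, perturbations never leave the eps L-infinity ball.
delta = perturb(X) - X
print(float(np.abs(delta).max()) <= eps)
```

Bounding the generator's output by construction (here via `eps * tanh`) mirrors the constrained attack set in the game formulation: the max player optimizes freely in parameter space while its perturbations remain feasible.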