Because real-world images come in varying sizes, a machine learning model is typically part of a larger system that includes an upstream image-scaling algorithm. In this system, both the model and the scaling algorithm have become attractive targets for numerous attacks, such as adversarial examples and the recent image-scaling attack. In response to these attacks, researchers have developed defenses tailored to the attacks at each processing stage. Because these defenses are developed in isolation, their underlying assumptions may no longer hold when they are viewed from the perspective of an end-to-end machine learning system. It is therefore necessary to study these attacks and defenses in the context of machine learning systems. In this paper, we investigate the interplay between vulnerabilities of the image-scaling procedure and of machine learning models in the challenging hard-label black-box setting. We propose a series of novel techniques that enable a black-box attack to exploit vulnerabilities in scaling algorithms, scaling defenses, and the final machine learning model in an end-to-end manner. Based on this scaling-aware attack, we reveal that most existing scaling defenses are ineffective under threat from downstream models. Moreover, we empirically observe that standard black-box attacks can significantly improve their performance by exploiting the vulnerable scaling procedure. We further demonstrate this issue on a commercial Image Analysis API using transfer-based black-box attacks.
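To make the scaling vulnerability concrete, the following is a minimal sketch (not the attack proposed in this paper) of the classic image-scaling attack against a nearest-neighbor downscaler: because nearest-neighbor scaling reads only one source pixel per output pixel, overwriting exactly those pixels embeds a hidden target image that appears only after downscaling. The helper names `nn_scaling_attack` and `nn_downscale`, the sampling rule, and the image shapes are illustrative assumptions; attacks on bilinear or bicubic kernels require an optimization over the scaling matrix instead.

```python
# Illustrative image-scaling attack for nearest-neighbor downscaling.
# Assumes the downscaler samples source pixel (i*H//h, j*W//w) for
# output pixel (i, j); real libraries may use a different rule.
import numpy as np

def nn_scaling_attack(decoy, target):
    """Embed `target` into `decoy` so that nearest-neighbor downscaling
    of the result reproduces `target`, while the full-size image still
    looks like `decoy` to a human viewer.

    decoy:  (H, W, C) uint8 array, the benign-looking high-res image
    target: (h, w, C) uint8 array, what the model sees after scaling
    """
    H, W = decoy.shape[:2]
    h, w = target.shape[:2]
    attack = decoy.copy()
    # Overwrite only the pixels that the downscaler will sample.
    rows = np.arange(h) * H // h
    cols = np.arange(w) * W // w
    attack[np.ix_(rows, cols)] = target
    return attack

def nn_downscale(img, h, w):
    """Reference nearest-neighbor downscaler matching the sampling above."""
    H, W = img.shape[:2]
    rows = np.arange(h) * H // h
    cols = np.arange(w) * W // w
    return img[np.ix_(rows, cols)]

# Usage: after downscaling, the attack image equals the target exactly,
# even though only h*w of the H*W decoy pixels were modified.
decoy  = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
adv = nn_scaling_attack(decoy, target)
assert np.array_equal(nn_downscale(adv, 64, 64), target)
```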