In Machine Learning as a Service, a provider trains a deep neural network and gives many users access. The hosted (source) model is susceptible to model stealing attacks, in which an adversary derives a surrogate model from API access to the source model. For post hoc detection of such attacks, the provider needs a robust method to determine whether a suspect model is a surrogate of their model. We propose a fingerprinting method for deep neural network classifiers that extracts a set of inputs from the source model such that only surrogates agree with the source model on the classification of these inputs. These inputs are a subclass of transferable adversarial examples, which we call conferrable adversarial examples, that transfer exclusively with a target label from a source model to its surrogates. We propose a new method to generate these conferrable adversarial examples. We present an extensive study of the irremovability of our fingerprint against fine-tuning, weight pruning, retraining, retraining with different architectures, three model extraction attacks from related work, transfer learning, adversarial training, and two new adaptive attacks. Our fingerprint is robust against distillation, related model extraction attacks, and even transfer learning when the attacker has no access to the model provider's dataset. Our fingerprint is the first method that reaches a ROC AUC of 1.0 in verifying surrogates, compared to a ROC AUC of 0.63 for previous fingerprints.
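The verification step described above can be sketched as follows: the provider queries the suspect model on the fingerprint inputs and checks how often its predictions agree with the source model's target labels. This is a minimal illustration, not the paper's exact procedure; the function name and the 0.75 threshold are hypothetical placeholders.

```python
import numpy as np

def fingerprint_match(source_labels, suspect_labels, threshold=0.75):
    """Decide whether a suspect model is likely a surrogate of the source model.

    source_labels:  labels the source model assigns to the fingerprint inputs
    suspect_labels: labels the suspect model assigns to the same inputs
    threshold:      minimum agreement rate to flag the suspect as a surrogate
                    (0.75 is an illustrative value, not taken from the paper)
    """
    agreement = float(np.mean(np.asarray(source_labels) == np.asarray(suspect_labels)))
    return agreement, agreement >= threshold

# Toy example: 10 fingerprint inputs, the suspect agrees on 9 of them.
src = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
sus = [3, 1, 4, 1, 5, 9, 2, 6, 5, 8]
rate, is_surrogate = fingerprint_match(src, sus)
```

Because conferrable adversarial examples transfer only to surrogates, an independently trained model is expected to agree at a much lower rate, which is what makes a simple agreement threshold a workable decision rule.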