Well-trained deep neural networks (DNNs) are an indispensable part of the intellectual property of the model owner. However, model confidentiality is threatened by \textit{model piracy}, which steals a DNN and obfuscates the pirated model with post-processing techniques. To counter model piracy, recent works propose model fingerprinting methods, which commonly use a special set of adversarial examples of the owner's classifier as the fingerprints, and verify whether a suspect model is pirated by checking whether the predictions on the fingerprints from the suspect model and from the owner's model match with one another. However, existing fingerprinting schemes are limited to classification models and usually require access to the training data. In this paper, we propose the first \textbf{T}ask-\textbf{A}gnostic \textbf{F}ingerprinting \textbf{A}lgorithm (TAFA) for the broad family of neural networks with rectified linear units. Compared with existing adversarial-example-based fingerprinting algorithms, TAFA enables model fingerprinting for DNNs on a variety of downstream tasks, including but not limited to classification, regression, and generative modeling, with no assumption of training data access. Extensive experimental results on three typical scenarios strongly validate the effectiveness and robustness of TAFA.