To protect the intellectual property (IP) of deep neural networks (DNNs), many existing DNN watermarking techniques either embed watermarks directly into the DNN parameters or insert backdoor watermarks by fine-tuning those parameters. However, neither approach can resist attacks that remove the watermark by altering the DNN parameters. In this paper, we bypass such attacks by introducing a structural watermarking scheme that uses channel pruning to embed the watermark into the architecture of the host DNN rather than into its parameters. Specifically, during watermark embedding, we prune the internal channels of the host DNN with channel pruning rates controlled by the watermark. During watermark extraction, the watermark is retrieved by identifying the channel pruning rates from the architecture of the target DNN model. Owing to the pruning mechanism, the performance of the DNN model on its original task is preserved during watermark embedding. Experimental results show that the proposed method enables the embedded watermark to be reliably recovered and provides a sufficient payload without sacrificing the usability of the DNN model. We further demonstrate that the proposed method is robust against common transforms and against attacks designed for conventional watermarking approaches.
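The core idea of encoding a watermark in pruning rates can be sketched as follows. This is a minimal illustrative scheme, not the paper's exact method: it assumes each layer encodes a fixed number of watermark bits as a quantized offset on top of a base pruning rate, so the bits can later be read back from the pruned architecture alone. All function names and parameter values (`base_rate`, `step`, `bits_per_layer`) are hypothetical.

```python
# Hypothetical sketch: embed watermark bits into per-layer channel
# pruning rates, then recover them by inverting the quantization.
# This is NOT the paper's exact embedding scheme.

def embed_bits_in_rates(bits, n_layers, bits_per_layer=2,
                        base_rate=0.1, step=0.05):
    """Map watermark bits to per-layer pruning rates.

    Each layer encodes `bits_per_layer` bits as an integer offset,
    added to a base pruning rate in units of `step`.
    """
    rates = []
    for i in range(n_layers):
        chunk = bits[i * bits_per_layer:(i + 1) * bits_per_layer]
        value = int("".join(map(str, chunk)), 2) if chunk else 0
        rates.append(base_rate + value * step)
    return rates

def extract_bits_from_rates(rates, bits_per_layer=2,
                            base_rate=0.1, step=0.05):
    """Recover watermark bits by inverting the quantization.

    In practice the rate would first be re-derived from the observed
    channel counts of the pruned model, e.g. 1 - kept / original.
    """
    bits = []
    for r in rates:
        value = round((r - base_rate) / step)
        bits.extend(int(b) for b in format(value, f"0{bits_per_layer}b"))
    return bits
```

A round trip illustrates the blind extraction property: given only the per-layer rates (recoverable from the architecture), the watermark bits come back without any access to the model weights.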