To protect the intellectual property (IP) of deep neural networks (DNNs), many existing DNN watermarking techniques either embed watermarks directly into the DNN parameters or insert backdoor watermarks by fine-tuning the DNN parameters; these techniques, however, cannot resist attacks that remove watermarks by altering the DNN parameters. In this paper, we bypass such attacks by introducing a structural watermarking scheme that uses channel pruning to embed the watermark into the host DNN architecture rather than into the DNN parameters. Specifically, during watermark embedding, we prune the internal channels of the host DNN with channel pruning rates controlled by the watermark. During watermark extraction, the watermark is retrieved by identifying the channel pruning rates from the architecture of the target DNN model. Owing to the pruning mechanism, the performance of the DNN model on its original task is preserved during watermark embedding. Experimental results show that the proposed scheme enables the embedded watermark to be reliably recovered and provides high watermark capacity without sacrificing the usability of the DNN model. It is also demonstrated that the scheme is robust against common transforms and attacks designed for conventional watermarking approaches.
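The embed/extract cycle described above can be illustrated with a minimal sketch. This is not the authors' implementation: the per-layer channel widths and the two candidate pruning rates are assumptions chosen for demonstration. Each watermark bit selects the pruning rate applied to one layer, and extraction recovers the bit by identifying which rate the observed channel count implies.

```python
# Illustrative sketch (assumed values, not the paper's actual settings):
# encode watermark bits as per-layer channel pruning rates, then recover
# them from the pruned architecture alone.

BASE_CHANNELS = [64, 128, 256, 512]   # hypothetical per-layer channel widths
RATE_FOR_BIT = {0: 0.25, 1: 0.50}     # watermark bit -> channel pruning rate

def embed(bits):
    """Prune each layer at the rate selected by its watermark bit."""
    return [round(c * (1 - RATE_FOR_BIT[b]))
            for c, b in zip(BASE_CHANNELS, bits)]

def extract(pruned_channels):
    """Recover each bit by identifying the pruning rate the layer used."""
    bits = []
    for c, p in zip(BASE_CHANNELS, pruned_channels):
        observed_rate = 1 - p / c
        # choose the candidate rate nearest to the observed one
        bits.append(min(RATE_FOR_BIT,
                        key=lambda b: abs(RATE_FOR_BIT[b] - observed_rate)))
    return bits

watermark = [1, 0, 1, 1]
architecture = embed(watermark)       # pruned widths: [32, 96, 128, 256]
assert extract(architecture) == watermark
```

Because the watermark lives in the channel counts rather than in the weights, fine-tuning or perturbing the parameters leaves the embedded bits untouched; only a structural change to the architecture could disturb them.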