With the wide application of deep neural networks, it has become important to verify a host's ownership of a model and to protect that model. Various mechanisms have been designed to meet this goal. By embedding extra information into a network and revealing it afterward, watermarking has become a competitive candidate for proving the integrity of deep learning systems. However, existing watermarking schemes can hardly be adopted for emerging distributed learning paradigms, which raise extra requirements during ownership verification. A prominent distributed learning paradigm is federated learning (FL), in which many parties jointly train a single model. Each author participating in FL should be able to verify its ownership independently. Moreover, this scenario introduces other potential threats and corresponding security requirements. To meet those requirements, this paper presents a watermarking protocol for protecting deep neural networks in the FL setting. By combining a state-of-the-art watermarking scheme with a cryptographic primitive designed for distributed storage, the protocol meets the need for ownership verification in the FL scenario without violating the privacy of any participant. This work paves the way for establishing watermarking as a practical security mechanism for protecting deep learning models on distributed learning platforms.
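The "embed extra information, reveal it afterward" idea can be made concrete with a minimal white-box watermarking sketch in the spirit of regularizer-based schemes: a bit string is pressed into the model's weights through the owner's secret projection matrix and later extracted to prove ownership. Everything below (the projection key `A`, the toy optimization, the sizes) is an illustrative assumption, not the paper's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

def embed(w, A, bits, step=0.1, iters=200):
    """Nudge weights so that sign(A @ w) encodes the owner's bits.

    Uses the binary-cross-entropy gradient (p - target), so the update
    never saturates even when a projection starts far on the wrong side.
    """
    w = w.copy()
    target = bits.astype(float)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(A @ w)))          # per-bit "probabilities"
        w -= step * A.T @ (p - target) / len(bits)
    return w

def extract(w, A):
    """Reveal the watermark: read off the sign of each projection."""
    return (A @ w > 0).astype(int)

n_params, n_bits = 64, 16
w = rng.normal(size=n_params)             # stand-in for model weights
A = rng.normal(size=(n_bits, n_params))   # owner's secret key matrix
bits = rng.integers(0, 2, size=n_bits)    # owner's watermark message

w_marked = embed(w, A, bits)
print((extract(w_marked, A) == bits).all())
```

Because the bit constraints are heavily underdetermined (16 bits against 64 parameters), the embedding only perturbs the weights slightly, which is what lets real schemes preserve task accuracy.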
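The abstract's "cryptographic primitive designed for distributed storage" is not specified; one common primitive in that family is Shamir's secret sharing, where each FL participant could hold one share of a verification secret so that no single party learns it alone. The sketch below is a toy illustration under that assumption, not the paper's actual construction.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime defining the finite field

def split(secret, n, k, rng=random.Random(0)):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, n=5, k=3)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice
```

With a (k, n) threshold like this, ownership evidence survives dropped-out participants (any k of n can reconstruct), while fewer than k colluding parties learn nothing about the secret, which matches the privacy requirement the abstract emphasizes.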