Federated learning models are collaboratively trained on valuable data owned by multiple parties. During development and deployment, federated models are exposed to risks including illegal copying, redistribution, misuse, and free-riding. To address these risks, ownership verification of federated learning models is a prerequisite for protecting their intellectual property rights (IPR), i.e., FedIPR. We propose a novel federated deep neural network (FedDNN) ownership verification scheme that allows private watermarks to be embedded and verified to claim legitimate IPR of FedDNN models. In the proposed scheme, each client independently verifies the existence of the model watermarks and claims its respective ownership of the federated model without disclosing either private training data or private watermark information. The effectiveness of the embedded watermarks is theoretically justified by a rigorous analysis of the conditions under which watermarks can be privately embedded and detected by multiple clients. Moreover, extensive experimental results on computer vision and natural language processing tasks demonstrate that watermarks of varying bit lengths can be embedded and reliably detected without compromising original model performance. The proposed watermarking scheme is also resilient across various federated training settings and robust against removal attacks.
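To make the embed-then-verify idea concrete, the following is a minimal, self-contained sketch (not the paper's exact method) of feature-based watermarking: a client holds a private embedding matrix and a private bit string, nudges the model weights with a sign-based hinge regularizer so the projected signs encode its bits, and later verifies ownership by checking the bit-recovery rate. All names (`embed_watermark`, `detect_watermark`) and the plain-gradient-descent setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_watermark(weights, secret_matrix, bits, lr=0.1, steps=200):
    """Nudge weights so sign(secret_matrix @ weights) matches the bits.

    Illustrative sketch: gradient descent on a hinge-style sign loss,
    standing in for the regularizer a client would add during training.
    """
    w = weights.copy()
    target = 2 * bits - 1  # map {0, 1} -> {-1, +1}
    for _ in range(steps):
        proj = secret_matrix @ w
        # Hinge is active only for bits whose projection disagrees
        # with (or is too close to) the target sign.
        active = (target * proj) < 1.0
        grad = -(secret_matrix * (target * active)[:, None]).sum(axis=0)
        w -= lr * grad / len(bits)
    return w

def detect_watermark(weights, secret_matrix, bits):
    """Return the fraction of watermark bits correctly recovered."""
    decoded = (secret_matrix @ weights > 0).astype(int)
    return (decoded == bits).mean()

d, n_bits = 64, 16
weights = rng.normal(size=d)                   # stand-in for model weights
secret = rng.normal(size=(n_bits, d))          # client-private embedding matrix
bits = rng.integers(0, 2, size=n_bits)         # client-private watermark bits

marked = embed_watermark(weights, secret, bits)
acc = detect_watermark(marked, secret, bits)   # bit-recovery rate for the owner
```

Because both the embedding matrix and the bit string stay private to the client, verification reveals neither training data nor the watermark itself to other parties; a non-owner querying with a random matrix recovers bits at roughly chance level.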