Conventional federated learning directly averages model weights, which is only possible when the collaborating models share a homogeneous architecture. Sharing predictions instead of weights removes this obstacle and also eliminates the risk of white-box inference attacks present in conventional federated learning. However, the predictions of local models are themselves sensitive and would leak the privacy of the training data to the public. A naive remedy is to add differentially private random noise to the predictions, but this imposes a substantial trade-off between the privacy budget and model performance. In this paper, we propose a novel framework called FEDMD-NFDP, which applies a Noise-Free Differential Privacy (NFDP) mechanism to a federated model distillation framework. Our extensive experimental results on various datasets validate that FEDMD-NFDP delivers comparable utility and communication efficiency while providing a noise-free differential privacy guarantee. We also demonstrate the feasibility of FEDMD-NFDP under both IID and non-IID settings, with heterogeneous model architectures, and with unlabelled public datasets drawn from a different distribution.
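To make the trade-off behind the naive baseline concrete, the following is a minimal illustrative sketch (not the paper's FEDMD-NFDP mechanism) of adding Laplace noise to a local model's prediction vector before sharing it; the function name and parameters are hypothetical:

```python
import numpy as np

def noisy_predictions(preds, epsilon, sensitivity=1.0, seed=0):
    """Illustrative DP baseline: perturb predictions with Laplace noise
    of scale sensitivity/epsilon before sharing them with the server.
    Smaller epsilon (stronger privacy) means larger noise, hence lower utility."""
    rng = np.random.default_rng(seed)
    preds = np.asarray(preds, dtype=float)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=preds.shape)
    return preds + noise

# A tight privacy budget heavily distorts the shared predictions,
# while a loose budget leaves them almost unchanged.
preds = [0.1, 0.7, 0.2]
tight = noisy_predictions(preds, epsilon=0.1)   # strong privacy, heavy distortion
loose = noisy_predictions(preds, epsilon=100.0)  # weak privacy, near-original
```

This is exactly the privacy-utility tension the abstract refers to: the noise scale grows as the privacy budget epsilon shrinks, which is the trade-off that a noise-free mechanism aims to avoid.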