This work proposes Fed-GLOSS-DP, a novel approach to privacy-preserving federated learning that trains the global model using synthetic data. In our approach, the server recovers an approximation of the global loss landscape in a local neighborhood from synthetic samples received from the clients. In contrast to previous point-wise, gradient-based linear approximations (such as in FedAvg), our formulation enables a form of global optimization that is particularly beneficial in non-IID federated settings. We also show how our formulation rigorously complements record-level differential privacy. Extensive results show that our formulation yields considerable improvements in convergence speed and communication cost. We argue that sending differentially private synthetic data instead of gradient updates offers a potential path toward reconciling privacy and accountability in federated learning. The source code will be released upon publication.
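To make the communication pattern concrete, the following is a minimal sketch (not the authors' implementation) of a round in which clients release noise-perturbed synthetic samples and the server optimizes the global model on the pooled synthetic set rather than averaging client gradients as in FedAvg. All function and variable names are hypothetical; the class-prototype-plus-Gaussian-noise synthesis and the noise scale are simple stand-ins for the paper's loss-landscape-based synthesis and its calibrated record-level DP mechanism, and a toy logistic-regression task is assumed.

```python
# Minimal sketch: clients send synthetic samples, the server trains globally.
# The synthesis and noise steps below are placeholders, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def client_release(X, y, n_synth=20, noise_scale=0.3):
    """Release synthetic samples: per-class prototypes plus Gaussian noise
    (a stand-in for a properly calibrated record-level DP synthesis step)."""
    synth_X, synth_y = [], []
    for c in np.unique(y):
        center = X[y == c].mean(axis=0)
        pts = center + noise_scale * rng.standard_normal((n_synth, X.shape[1]))
        synth_X.append(pts)
        synth_y.append(np.full(n_synth, c))
    return np.vstack(synth_X), np.concatenate(synth_y)

def server_train(synth_sets, dim, lr=0.5, steps=200):
    """Server-side global optimization on the pooled synthetic data, which
    approximates the global loss landscape in a local neighborhood."""
    X = np.vstack([s[0] for s in synth_sets])
    y = np.concatenate([s[1] for s in synth_sets])
    w = np.zeros(dim)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # full-batch gradient step
    return w

# Two non-IID clients: each holds only one class of a toy 2-D problem.
X0 = rng.normal(loc=-1.0, size=(100, 2)); y0 = np.zeros(100)
X1 = rng.normal(loc=+1.0, size=(100, 2)); y1 = np.ones(100)
releases = [client_release(X0, y0), client_release(X1, y1)]
w_global = server_train(releases, dim=2)
print("global weights:", w_global)
```

In this sketch, the server sees only the released synthetic samples, never raw records or per-client gradients, which is the property the abstract highlights as the basis for reconciling privacy with accountability.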