Federated learning (FL) is a training paradigm in which clients collaboratively learn models by repeatedly sharing information, without substantially compromising the privacy of their local sensitive data. In this paper, we introduce federated $f$-differential privacy, a new notion specifically tailored to the federated setting, built on the framework of Gaussian differential privacy. Federated $f$-differential privacy operates at the record level: it provides a privacy guarantee for each individual record in a client's data against adversaries. We then propose a generic private federated learning framework, {PriFedSync}, that accommodates a large family of state-of-the-art FL algorithms and provably achieves federated $f$-differential privacy. Finally, we empirically demonstrate the trade-off between privacy guarantees and prediction performance for models trained with {PriFedSync} on computer vision tasks.