Federated learning (FL) is an emerging privacy-preserving paradigm in which a global model is trained at a central server while client data remain local. However, FL can still indirectly leak private client information through the model updates exchanged during training. Differential privacy (DP) can be employed to provide privacy guarantees within FL, typically at the cost of a degraded final trained model. In this work, we consider a heterogeneous DP setup where clients are private by default, but some may choose to opt out of DP. We propose a new algorithm for federated learning with opt-out DP, referred to as \emph{FeO2}, along with a discussion of its advantages compared to the baselines of private and personalized FL algorithms. We prove that the server-side and client-side procedures in \emph{FeO2} are optimal for a simplified linear problem. We also analyze the incentive for opting out of DP in terms of the resulting performance gain. Through numerical experiments, we show that \emph{FeO2} provides up to a $9.27\%$ gain in global model performance compared to the baseline DP FL algorithm on the considered datasets. Additionally, we observe a gap of up to $3.49\%$ in the average performance of personalized models between non-private and private clients, empirically illustrating the incentive for clients to opt out.