This paper studies the problem of federated learning (FL) in the absence of a trustworthy server/clients. In this setting, each client needs to ensure the privacy of its own data, even if the server or other clients act adversarially. This requirement motivates the study of local differential privacy (LDP) at the client level. We provide tight (up to logarithms) upper and lower bounds for LDP convex/strongly convex federated stochastic optimization with homogeneous (i.i.d.) client data. The LDP rates match the optimal statistical rates in certain practical parameter regimes ("privacy for free"). Remarkably, we show that similar rates are attainable for smooth losses with arbitrary heterogeneous client data distributions, via a linear-time accelerated LDP algorithm. We also provide tight upper and lower bounds for LDP federated empirical risk minimization (ERM). While a tight upper bound for ERM was provided in prior work, we use acceleration to attain this bound in fewer rounds of communication. Finally, with a secure "shuffler" to anonymize client reports (but without the presence of a trusted server), our algorithm attains the optimal central differentially private rates for stochastic convex/strongly convex optimization. Numerical experiments validate our theory and show favorable privacy-accuracy tradeoffs for our algorithm.
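To make the client-level LDP guarantee concrete, the sketch below shows the standard Gaussian-mechanism building block that algorithms in this setting typically rely on: each client clips its gradient and adds calibrated noise locally, before anything leaves the device, so privacy holds even against an adversarial server. The function and parameter names here are illustrative assumptions, not the paper's actual algorithm (which additionally uses acceleration).

```python
import numpy as np

def ldp_gradient(grad, clip_norm, noise_multiplier, rng):
    """Privatize one client's gradient on-device (illustrative sketch).

    Clips the gradient to L2 norm `clip_norm`, then adds Gaussian noise
    with std `clip_norm * noise_multiplier` -- the Gaussian mechanism
    commonly used for local differential privacy.
    """
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, clip_norm * noise_multiplier, size=grad.shape)
    return clipped + noise

# One federated round: each client privatizes its gradient locally;
# the untrusted server only ever sees the noisy reports, which it
# averages for a plain SGD step.
rng = np.random.default_rng(0)
w = np.zeros(5)
client_grads = [rng.normal(size=5) for _ in range(10)]
private_grads = [
    ldp_gradient(g, clip_norm=1.0, noise_multiplier=0.8, rng=rng)
    for g in client_grads
]
w -= 0.1 * np.mean(private_grads, axis=0)
```

Because the noise scale is set per client report rather than on the server-side aggregate, the guarantee is local: no single message reveals much about its sender, regardless of how the server or other clients behave.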