Federated learning (FL) enables training on massive amounts of data while keeping the data decentralized and private. Stochastic gradient descent (SGD) is commonly used for FL because of its strong empirical performance, but sensitive user information can still be inferred from the weight updates shared during FL iterations. We apply Gaussian mechanisms to preserve the local differential privacy (LDP) of user data in an FL model trained with SGD. By defining appropriate metrics for FL with LDP, we prove trade-offs between user privacy, global utility, and transmission rate. In contrast to existing results, the query sensitivity used in LDP is treated as a variable, and a tighter privacy accounting method is applied. The proposed utility bound allows heterogeneous parameters across users. Our bounds characterize how much utility decreases and transmission rate increases as a stronger privacy regime is targeted. Furthermore, for a given target privacy level, our results guarantee significantly higher utility and a lower transmission rate than existing privacy accounting methods.
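The Gaussian mechanism described above can be sketched as follows: each user clips its local update to bound the query sensitivity, then adds Gaussian noise scaled to that bound before sharing the update. This is a minimal illustrative sketch assuming numpy; the function name, clipping norm, and noise multiplier are hypothetical placeholders, not the paper's actual parameters or accounting method.

```python
import numpy as np

def privatize_update(update, clip_norm, noise_multiplier, rng=None):
    """Gaussian mechanism for a local update (illustrative sketch).

    Clips the update's L2 norm to clip_norm (bounding the query
    sensitivity), then adds zero-mean Gaussian noise with standard
    deviation noise_multiplier * clip_norm per coordinate.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    # Scale down only if the update exceeds the sensitivity bound.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Example: privatize a raw local gradient before uploading it.
grad = np.array([3.0, 4.0])  # L2 norm = 5, exceeds clip_norm below
private_grad = privatize_update(grad, clip_norm=1.0, noise_multiplier=1.0,
                                rng=np.random.default_rng(0))
```

A smaller noise multiplier yields higher utility but a weaker privacy guarantee; the trade-off proved in the paper quantifies this relationship, together with the transmission rate of the (noisy) updates.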