We study differentially private distributed optimization under communication constraints. A server using SGD aggregates the clients' local gradients for model updates via distributed mean estimation (DME). We develop a communication-efficient private DME scheme using the recently developed multi-message shuffled (MMS) privacy framework, and we show that it achieves the order-optimal privacy-communication-performance tradeoff. This resolves an open question posed in [1] of whether shuffled models can improve upon the tradeoff obtained with Secure Aggregation, and it also settles the open question of the optimal tradeoff for private vector summation in the MMS model. We achieve this through a novel privacy mechanism that non-uniformly allocates the privacy budget across different resolutions of the local gradient vectors. These results directly yield guarantees for private distributed learning algorithms that apply the scheme iteratively for private gradient aggregation. We also evaluate the private DME algorithms numerically.
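To make the DME pipeline concrete, the following is a minimal Python sketch of one private aggregation round in the shuffled model, under illustrative assumptions: each client clips and stochastically quantizes its gradient to a fixed resolution, zero-mean symmetric binomial noise stands in for the paper's non-uniform privacy mechanism, and the shuffler is simulated by permuting the messages. All names (client_encode, server_decode) and parameters (clip_norm, num_levels, noise_trials) are hypothetical, not the scheme from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_encode(grad, clip_norm, num_levels, noise_trials):
    """One client: clip, stochastically quantize, and add discrete noise."""
    # Clip to L2 norm at most clip_norm so every coordinate is bounded.
    g = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    # Map [-clip_norm, clip_norm] onto integer levels {0, ..., num_levels - 1}.
    scaled = (g + clip_norm) / (2 * clip_norm) * (num_levels - 1)
    low = np.floor(scaled)
    quantized = low + (rng.random(g.shape) < (scaled - low))  # unbiased rounding
    # Zero-mean symmetric binomial noise: an illustrative stand-in for a
    # discrete DP mechanism, NOT the paper's non-uniform allocation.
    noise = rng.binomial(noise_trials, 0.5, g.shape) - noise_trials / 2
    return quantized + noise

def server_decode(messages, clip_norm, num_levels):
    """Server: average the shuffled messages and map back to gradient space."""
    mean_q = messages.mean(axis=0)
    return mean_q / (num_levels - 1) * (2 * clip_norm) - clip_norm

# Simulated round: n clients holding d-dimensional local gradients.
n, d = 50, 8
grads = rng.normal(scale=0.1, size=(n, d))
msgs = np.stack([client_encode(g, clip_norm=1.0, num_levels=2**8, noise_trials=32)
                 for g in grads])
rng.shuffle(msgs)  # the shuffler breaks the link between message and client
estimate = server_decode(msgs, clip_norm=1.0, num_levels=2**8)
print("true mean:", grads.mean(axis=0).round(3))
print("DME estimate:", estimate.round(3))
```

In an SGD loop, the server would use the decoded estimate as the aggregated gradient for the model update; the quantization resolution controls the per-client communication cost, and the noise level controls the privacy guarantee amplified by shuffling.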