Analyzing data owned by several parties while achieving a good trade-off between utility and privacy is a key challenge in federated learning and analytics. In this work, we introduce a novel relaxation of local differential privacy (LDP) that naturally arises in fully decentralized algorithms, i.e., when participants exchange information by communicating along the edges of a network graph without a central coordinator. This relaxation, which we call network DP, captures the fact that users have only a local view of the system. To show the relevance of network DP, we study a decentralized model of computation where a token performs a walk on the network graph and is updated sequentially by the party that receives it. For tasks such as real summation, histogram computation, and optimization with gradient descent, we propose simple algorithms on ring and complete topologies. We prove that the privacy-utility trade-offs of our algorithms under network DP significantly improve upon what is achievable under LDP (sometimes even matching the utility of the trusted curator model), showing for the first time that formal privacy gains can be obtained from full decentralization. Our experiments illustrate the improved utility of our approach for decentralized training with stochastic gradient descent.
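To make the token-walk model of computation concrete, here is a minimal Python sketch of the real-summation case on a ring topology. The function name and the Gaussian noise scale `sigma` are illustrative assumptions, not the paper's actual mechanism; in the paper, the noise is calibrated to the target privacy guarantee under network DP, which exploits the fact that each party only ever observes the running token, never individual contributions.

```python
import numpy as np

def private_ring_summation(values, sigma, rng=None):
    """Illustrative sketch (not the paper's algorithm): a token walks
    around a ring, and each party adds its locally perturbed value
    before forwarding it. `sigma` is a hypothetical noise scale; the
    paper calibrates noise to the desired guarantee under network DP."""
    rng = np.random.default_rng() if rng is None else rng
    token = 0.0
    for x in values:  # the token visits parties in ring order
        # each party sees only the running token, not individual inputs
        token += x + rng.normal(0.0, sigma)
    return token

# Usage: 10 parties, each holding a scalar value in [0, 1]
values = np.random.default_rng(0).uniform(0.0, 1.0, size=10)
print(private_ring_summation(values, sigma=0.1))
```

Because the noise contributions of all parties accumulate in the token, each party can inject less noise than it would need under LDP while keeping its own input hidden from the others, which is the intuition behind the improved privacy-utility trade-offs claimed above.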