Analyzing data owned by several parties while achieving a good trade-off between utility and privacy is a key challenge in federated learning and analytics. In this work, we introduce a novel relaxation of local differential privacy (LDP) that naturally arises in fully decentralized protocols, i.e., when participants exchange information along the edges of a network graph. This relaxation, which we call network DP, captures the fact that users have only a local view of the decentralized system. To show the relevance of network DP, we study a decentralized model of computation in which a token performs a walk on the network graph and is updated sequentially by the party that receives it. For tasks such as real summation, histogram computation, and optimization with gradient descent, we propose simple algorithms on ring and complete topologies. We prove that the privacy-utility trade-offs of our algorithms significantly improve upon LDP, and in some cases even match what can be achieved with methods based on trusted/secure aggregation and shuffling. Our experiments illustrate the superior utility of our approach when training a machine learning model with stochastic gradient descent.
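To give intuition for the token-walk model of computation described above, here is a minimal sketch of private real summation on a ring topology. It is an illustrative toy, not the paper's actual algorithm: the function name `ring_token_summation` and the noise scale `sigma` are assumptions, and the Gaussian noise is not calibrated here to any formal (epsilon, delta) guarantee.

```python
import numpy as np

def ring_token_summation(values, sigma, rng=None):
    """Toy sketch of a token walk on a ring (illustrative, not the paper's algorithm).

    A token carrying a running sum visits each party once, in ring order.
    Each party adds its private value plus Gaussian noise before passing
    the token on. The network-DP intuition is that only the party
    currently holding the token observes its value; all other parties
    see nothing, so each contribution is hidden among those accumulated
    before it.

    Args:
        values: private real values, one per party.
        sigma: std. dev. of each party's Gaussian noise (hypothetical
               parameter, not tied to a formal privacy accounting here).
    """
    rng = np.random.default_rng() if rng is None else rng
    token = 0.0
    for v in values:  # parties on the ring, visited sequentially
        token += v + rng.normal(0.0, sigma)
    return token  # noisy estimate of sum(values)

# Usage: 10 parties, each holding one value in [0, 1]
rng = np.random.default_rng(0)
vals = rng.uniform(0.0, 1.0, size=10)
est = ring_token_summation(vals, sigma=0.1, rng=rng)
print(f"true sum = {vals.sum():.3f}, noisy estimate = {est:.3f}")
```

Because each party injects noise once, the estimate's total variance grows linearly with the number of parties, while each individual value stays masked from everyone except the token's current holder; this is the kind of privacy-utility trade-off the abstract compares against LDP.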