Decentralized optimization is gaining traction due to its widespread applications in large-scale machine learning and multi-agent systems. However, the very mechanism that enables its success, namely information sharing among participating agents, also leads to the disclosure of individual agents' private information, which is unacceptable when sensitive data are involved. As differential privacy is becoming a de facto standard for privacy preservation, results have recently emerged integrating differential privacy with distributed optimization. However, directly incorporating differential-privacy design into existing distributed optimization approaches significantly compromises optimization accuracy. In this paper, we propose to redesign and tailor gradient methods for differentially private distributed optimization, and present two differential-privacy-oriented gradient methods that ensure both rigorous epsilon-differential privacy and optimality. The first algorithm is based on static-consensus-based gradient methods; the second is based on dynamic-consensus (gradient-tracking) based distributed optimization methods and is hence applicable to general directed interaction graph topologies. Both algorithms simultaneously ensure almost sure convergence to an optimal solution and a finite privacy budget, even when the number of iterations goes to infinity. To our knowledge, this is the first time that both goals have been achieved simultaneously. Numerical simulations on a distributed estimation problem and experimental results on a benchmark dataset confirm the effectiveness of the proposed approaches.
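To make the general idea concrete, the following is a minimal, hedged sketch of a static-consensus decentralized gradient iteration in which each agent perturbs the state it shares with Laplace noise before mixing. The topology (a ring), the mixing weights, the noise schedule, and the stepsize schedule below are illustrative choices for a toy quadratic problem, not the calibrated sequences or the specific algorithms proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: agent i holds f_i(x) = 0.5 * (x - b_i)^2, so the
# network-wide optimum of sum_i f_i is the mean of the b_i (here 3.0).
n = 5
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = np.zeros(n)  # each agent's local estimate

# Doubly stochastic mixing matrix for a ring graph (illustrative).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for k in range(1, 2001):
    step = 1.0 / (k + 10)            # diminishing stepsize
    noise_scale = 0.5 * 0.95 ** k    # decaying Laplace noise on shared states
    # Each agent broadcasts a privacy-perturbed copy of its state ...
    shared = x + rng.laplace(scale=noise_scale, size=n)
    # ... then mixes neighbors' perturbed states and takes a local
    # gradient step (grad f_i(x_i) = x_i - b_i).
    x = W @ shared - step * (x - b)

print(x)  # all agents end up near the optimum, mean(b) = 3.0
```

The key qualitative point the paper formalizes is visible here: because only the perturbed `shared` values leave each agent, privacy leakage is controlled by the noise sequence, while the decaying noise and diminishing stepsize still allow the iterates to reach consensus on a solution.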