Due to the massive amounts of data distributed across multiple locations, distributed machine learning has attracted significant research interest. The Alternating Direction Method of Multipliers (ADMM) is a powerful method for designing distributed machine learning algorithms, whereby each agent computes over its local dataset and exchanges intermediate results with its neighboring agents in an iterative procedure. This iterative exchange can leak significant private information when the local data are sensitive. In this paper, we propose a differentially private ADMM algorithm (P-ADMM) that provides dynamic zero-concentrated differential privacy (dynamic zCDP) by injecting Gaussian noise with linearly decaying variance. We prove that P-ADMM matches the convergence rates of its non-private counterpart while providing a differential privacy guarantee: $\mathcal{O}(1/K)$, where $K$ is the number of iterations, for general convex problems, and linear convergence for strongly convex problems. Moreover, through experiments on real-world datasets, we empirically show that P-ADMM achieves the best performance among existing differentially private ADMM-based algorithms.
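To make the perturbation mechanism concrete, below is a minimal sketch of consensus ADMM with Gaussian noise whose variance decays linearly in the iteration count. The function `noisy_consensus_admm`, the quadratic local objectives $f_i(x) = \|x - c_i\|^2/2$, and the schedule $\sigma_k^2 = \sigma_0^2 \max(1 - \gamma k, \epsilon)$ are illustrative assumptions, not the paper's exact P-ADMM updates or its zCDP noise calibration.

```python
import numpy as np

# Sketch: consensus ADMM where each agent's primal iterate is perturbed with
# Gaussian noise before being shared. The noise variance decays linearly in k,
# mimicking the "linearly decaying variance" idea; all constants are assumed.

def noisy_consensus_admm(cs, rho=1.0, sigma0=0.5, decay=0.01, K=100, seed=0):
    """Each agent i holds f_i(x) = ||x - c_i||^2 / 2; the exact consensus
    solution is the mean of the c_i. Returns the final consensus iterate."""
    rng = np.random.default_rng(seed)
    n, d = cs.shape
    z = np.zeros(d)        # global consensus variable
    u = np.zeros((n, d))   # scaled dual variables, one per agent
    for k in range(K):
        # Assumed schedule: sigma_k^2 = sigma0^2 * max(1 - decay * k, eps).
        sigma_k = sigma0 * np.sqrt(max(1.0 - decay * k, 1e-3))
        # Local primal update (closed form for the quadratic f_i), then
        # Gaussian noise is added before the iterate is exchanged.
        x = (cs + rho * (z - u)) / (1.0 + rho)
        x += rng.normal(0.0, sigma_k, size=x.shape)
        # Consensus and dual updates run on the perturbed iterates.
        z = np.mean(x + u, axis=0)
        u += x - z
    return z

cs = np.array([[1.0, 0.0], [3.0, 2.0], [2.0, -1.0]])
print(noisy_consensus_admm(cs))  # noisy estimate of mean(cs) = [2.0, 0.33]
```

Because the injected variance shrinks as the iterates approach consensus, later updates are perturbed less, which is the intuition behind retaining the non-private convergence rate under this kind of schedule.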