Alternating Direction Method of Multipliers (ADMM) is a widely used tool for machine learning in distributed settings, where a machine learning model is trained over distributed data sources through an interactive process of local computation and message passing. Such an iterative process can raise privacy concerns for data owners. The goal of this paper is to provide differential privacy for ADMM-based distributed machine learning. Prior approaches to differentially private ADMM exhibit low utility under a high privacy guarantee and often assume the objective functions of the learning problems to be smooth and strongly convex. To address these concerns, we propose a novel differentially private ADMM-based distributed learning algorithm called DP-ADMM, which combines an approximate augmented Lagrangian function with time-varying Gaussian noise addition in the iterative process to achieve higher utility for general objective functions under the same differential privacy guarantee. We also apply the moments accountant method to bound the end-to-end privacy loss. The theoretical analysis shows that DP-ADMM can be applied to a wider class of distributed learning problems, is provably convergent, and offers an explicit utility-privacy tradeoff. To our knowledge, this is the first paper to provide explicit convergence and utility properties for differentially private ADMM-based distributed learning algorithms. The evaluation results demonstrate that our approach can achieve good convergence and model accuracy under a high end-to-end differential privacy guarantee.
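To make the noisy-iterate mechanism concrete, below is a minimal, illustrative Python sketch of consensus ADMM in which each agent perturbs its local primal update with Gaussian noise whose scale decays across iterations. This is not the paper's DP-ADMM algorithm: the least-squares objective, the noise schedule sigma0/sqrt(t), and all parameter values are assumptions chosen for illustration, and the sketch omits the sensitivity calibration and moments-accountant analysis required for an actual differential privacy guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic consensus least-squares problem split across N agents:
# each agent i holds private data (A_i, b_i); all agents jointly fit a shared model z.
N, d, m = 4, 3, 20
true_x = np.array([1.0, -2.0, 0.5])
agents = []
for _ in range(N):
    A = rng.normal(size=(m, d))
    b = A @ true_x + 0.1 * rng.normal(size=m)
    agents.append((A, b))

def local_noisy_update(z, lam, A, b, rho, sigma_t):
    """One agent's primal step: minimize
    ||A x - b||^2 + lam^T (x - z) + (rho/2) ||x - z||^2
    in closed form, then perturb the minimizer with Gaussian noise
    of standard deviation sigma_t before sharing it."""
    dim = z.shape[0]
    H = 2.0 * A.T @ A + rho * np.eye(dim)
    g = 2.0 * A.T @ b + rho * z - lam
    x_new = np.linalg.solve(H, g)
    return x_new + rng.normal(0.0, sigma_t, size=dim)  # time-varying noise injection

rho = 1.0
z = np.zeros(d)
lams = [np.zeros(d) for _ in range(N)]
sigma0 = 0.5
for t in range(1, 51):
    sigma_t = sigma0 / np.sqrt(t)  # decaying noise schedule (illustrative assumption)
    xs = [local_noisy_update(z, lams[i], *agents[i], rho, sigma_t) for i in range(N)]
    z = np.mean([xs[i] + lams[i] / rho for i in range(N)], axis=0)  # consensus step
    lams = [lams[i] + rho * (xs[i] - z) for i in range(N)]          # dual ascent

print("true model:     ", true_x)
print("noisy estimate: ", np.round(z, 3))
```

Only the perturbed iterates and dual variables are exchanged in this sketch, which is the point of the construction: the messages passed between agents, rather than the raw data, carry the privacy risk, so the noise is injected exactly there.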