We study differentially private (DP) machine learning algorithms as instances of noisy fixed-point iterations, in order to derive privacy and utility results from this well-studied framework. We show that this new perspective recovers popular private gradient-based methods like DP-SGD and provides a principled way to design and analyze new private optimization algorithms in a flexible manner. Focusing on the widely used Alternating Direction Method of Multipliers (ADMM), we use our general framework to derive novel private ADMM algorithms for centralized, federated and fully decentralized learning. For these three algorithms, we establish strong privacy guarantees leveraging privacy amplification by iteration and by subsampling. Finally, we provide utility guarantees using a unified analysis that exploits a recent linear convergence result for noisy fixed-point iterations.
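To make the fixed-point view concrete, the following is a minimal illustrative sketch (not the authors' code) of DP-SGD written as a noisy Krasnosel'skii–Mann-style update x_{t+1} = x_t + λ (T(x_t) + η_t − x_t), where T is a clipped mini-batch gradient step on a toy least-squares problem. The problem, step size, clipping threshold, batch size, and noise level are placeholder assumptions chosen only for illustration.

```python
# Illustrative sketch: DP-SGD as a noisy fixed-point iteration
#   x_{t+1} = x_t + lam * (T(x_t) + noise - x_t)
# where T is a clipped mini-batch gradient-step operator.
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize (1/2n) * ||A x - b||^2 over n samples.
n, d = 200, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad_step_operator(x, idx, step=0.05, clip=1.0):
    """Gradient-step operator T on a sampled mini-batch, with per-sample clipping."""
    grads = (A[idx] @ x - b[idx])[:, None] * A[idx]           # per-sample gradients
    norms = np.maximum(1.0, np.linalg.norm(grads, axis=1) / clip)
    g = (grads / norms[:, None]).mean(axis=0)                 # clipped, averaged gradient
    return x - step * g

x = np.zeros(d)
lam, sigma, batch = 1.0, 0.1, 32                              # lam = 1 recovers plain DP-SGD
for t in range(500):
    idx = rng.choice(n, size=batch, replace=False)            # subsampling (amplification)
    noise = sigma * rng.normal(size=d)                        # Gaussian privacy noise
    x = x + lam * (grad_step_operator(x, idx) + noise - x)    # noisy fixed-point update

print("final loss:", 0.5 * np.mean((A @ x - b) ** 2))
```

With lam = 1 the update reduces to the familiar noisy clipped SGD step; other relaxation parameters and operators (e.g. ADMM's proximal updates) fit the same template.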