We model the dynamics of privacy loss in Langevin diffusion and extend it to the noisy gradient descent algorithm: we compute a tight bound on R\'enyi differential privacy and the rate of its change throughout the learning process. We prove that the privacy loss converges exponentially fast. This significantly improves the prior privacy analysis of differentially private (stochastic) gradient descent algorithms, where the (R\'enyi) privacy loss grows steadily over the training iterations. Unlike composition-based methods in differential privacy, our privacy analysis does not assume that the noisy gradients (or parameters) computed during training are revealed to the adversary. Our analysis tracks the dynamics of privacy loss through the algorithm's intermediate parameter distributions, thus allowing us to account for privacy amplification due to convergence. We prove that our privacy analysis is tight, and also provide a utility analysis for strongly convex, smooth, and Lipschitz loss functions.
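For concreteness, a minimal sketch of the noisy gradient descent update analyzed here, written as the standard Euler discretization of Langevin diffusion; the notation ($\eta$ for the step size, $\sigma^2$ for the noise variance, $\mathcal{L}(\cdot; D)$ for the empirical loss on dataset $D$) is ours for illustration and not fixed by the abstract:
\[
\theta_{k+1} \;=\; \theta_k \;-\; \eta\, \nabla \mathcal{L}(\theta_k; D) \;+\; \sqrt{2\eta\sigma^2}\,\psi_k, \qquad \psi_k \sim \mathcal{N}(0, \mathbb{I}_d),
\]
which, as $\eta \to 0$, tracks the continuous-time Langevin diffusion $\mathrm{d}\Theta_t = -\nabla \mathcal{L}(\Theta_t; D)\,\mathrm{d}t + \sqrt{2\sigma^2}\,\mathrm{d}W_t$. The privacy analysis tracks how the distribution of $\theta_k$ on neighboring datasets evolves under this recursion, rather than composing the privacy cost of each released gradient.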