In the context of DP-SGD, each round communicates a local SGD update, which leaks some new information about the underlying local data set to the outside world. To provide privacy, Gaussian noise is added to local SGD updates. However, privacy leakage still accumulates over multiple training rounds. Therefore, in order to control privacy leakage over an increasing number of training rounds, we need to increase the added Gaussian noise per local SGD update. This dependence of the amount of Gaussian noise $\sigma$ on the number of training rounds $T$ may impose an impractical upper bound on $T$ (because $\sigma$ cannot be too large), leading to a low-accuracy global model (because the global model receives too few local SGD updates). This makes DP-SGD much less competitive than other existing privacy techniques. We show for the first time that for $(\epsilon,\delta)$-differential privacy, $\sigma$ can be chosen equal to $\sqrt{2(\epsilon +\ln(1/\delta))/\epsilon}$ regardless of the total number of training rounds $T$. In other words, $\sigma$ no longer depends on $T$ (the accumulated privacy leakage increases to a limit). This important discovery brings DP-SGD into practice: $\sigma$ can remain small, so the trained model retains high accuracy even for the large $T$ typical in practice.
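The stated noise level can be computed directly from the privacy parameters. Below is a minimal sketch of that formula; the function name `sigma_for_dp` is hypothetical, and the code assumes only the expression $\sigma = \sqrt{2(\epsilon + \ln(1/\delta))/\epsilon}$ given above.

```python
import math

def sigma_for_dp(epsilon: float, delta: float) -> float:
    """Noise multiplier sigma = sqrt(2*(epsilon + ln(1/delta)) / epsilon).

    Per the claim above, this value is independent of the number of
    training rounds T. (Helper name is illustrative, not from the paper.)
    """
    return math.sqrt(2.0 * (epsilon + math.log(1.0 / delta)) / epsilon)

# Example: a common setting epsilon = 1.0, delta = 1e-5
print(sigma_for_dp(1.0, 1e-5))  # about 5.0
```

Note that $\sigma$ decreases as $\epsilon$ grows (weaker privacy needs less noise), while the number of rounds $T$ never enters the computation.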