We propose an information-theoretic technique for analyzing the privacy guarantees of online algorithms. Specifically, we demonstrate that differential privacy guarantees of iterative algorithms can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for $f$-divergences. Our technique relies on generalizing Dobrushin's contraction coefficient for total variation distance to the $f$-divergence known as $E_\gamma$-divergence, which, in turn, is equivalent to approximate differential privacy. As an example, we apply our technique to derive the differential privacy parameters of gradient descent. Moreover, we show that this framework can be tailored to batch learning algorithms that can be implemented with one pass over the training dataset.
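For concreteness, the standard definitions underlying these claims (not spelled out in the abstract itself) are as follows. For $\gamma \ge 1$, the $E_\gamma$-divergence (hockey-stick divergence) between distributions $P$ and $Q$ is
\[
E_\gamma(P \,\|\, Q) \;=\; \sup_{A} \big( P(A) - \gamma\, Q(A) \big),
\]
and a mechanism $M$ satisfies $(\varepsilon,\delta)$-differential privacy if and only if $E_{e^{\varepsilon}}\big(M(x) \,\big\|\, M(x')\big) \le \delta$ for all neighboring datasets $x, x'$. The associated contraction coefficient of a Markov kernel $K$,
\[
\eta_\gamma(K) \;=\; \sup_{\substack{P,\,Q:\; E_\gamma(P\|Q) \neq 0}} \frac{E_\gamma(PK \,\|\, QK)}{E_\gamma(P \,\|\, Q)},
\]
recovers Dobrushin's coefficient for total variation at $\gamma = 1$ (since $E_1 = \mathsf{TV}$), and the resulting strong data processing inequality $E_\gamma(PK \,\|\, QK) \le \eta_\gamma(K)\, E_\gamma(P \,\|\, Q)$ is what allows the privacy guarantee of an iterative algorithm to be tracked across its successive updates.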