Differential privacy is the de facto privacy framework and has seen adoption in practice through a number of mature software platforms. Differentially private (DP) mechanisms must be implemented carefully to ensure end-to-end security guarantees. In this paper we study two implementation flaws in the noise generation commonly used in DP systems. First, we examine the Gaussian mechanism's susceptibility to a floating-point representation attack. The premise of this first vulnerability is similar to that of the attack carried out by Mironov in 2011 against the Laplace mechanism. Our experiments show the attack's success against DP algorithms, including deep learners trained using differentially private stochastic gradient descent (DP-SGD). In the second part of the paper we study discrete counterparts of the Laplace and Gaussian mechanisms that were previously proposed to alleviate the shortcomings of floating-point representation of real numbers. We show that such implementations unfortunately suffer from another side channel: a novel timing attack. An observer that can measure the time taken to draw (discrete) Laplace or Gaussian noise can predict the noise magnitude, which can then be used to recover sensitive attributes. This attack invalidates the differential privacy guarantees of systems implementing such mechanisms. We demonstrate that several commonly used, state-of-the-art implementations of differential privacy are susceptible to these attacks. We report success rates of up to 92.56% for floating-point attacks on DP-SGD, and up to 99.65% for end-to-end timing attacks on a private sum protected with the discrete Laplace mechanism. Finally, we evaluate and suggest partial mitigations.
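The premise of the floating-point vulnerability can be illustrated with a toy example (this is not the paper's attack, just a minimal sketch of the underlying IEEE-754 artifact): because doubles are discrete and rounding depends on magnitude, some mechanism outputs are unattainable from some inputs by adding any representable noise value, so an observer who sees such an output can rule those inputs out.

```python
# Toy illustration of the floating-point premise (hypothetical values,
# not from the paper): rounding makes the output o unreachable from x.
x = 1e16   # exactly representable double; neighbouring doubles are 2.0 apart
o = 1.0    # hypothetical observed mechanism output
n = o - x  # the rounding-consistent noise candidate, fl(1 - 1e16)

print(x + n)        # rounds away from o entirely
print(x + n == o)   # False: no double noise value maps x onto o
```

At this magnitude every candidate noise value is a multiple of 2, so `x + n` can only land on even integers and the output `1.0` is impossible under input `x`; a real attack exploits the same kind of representational gap between adjacent inputs.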
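The timing side channel can likewise be sketched. One common construction (assumed here for illustration; implementations vary) draws discrete Laplace noise as the difference of two geometric samples, and the geometric sampling loop runs longer for larger values, so sampling time leaks the noise magnitude.

```python
# Minimal sketch, assuming a geometric-difference discrete Laplace sampler.
# Loop-iteration count stands in for wall-clock time.
import math
import random

def geometric(p, rng):
    """Failures before the first success; the loop body runs once per
    returned unit, so running time grows with the sampled value."""
    k = 0
    while rng.random() >= p:
        k += 1
    return k

def discrete_laplace(scale, rng):
    """Discrete Laplace noise as the difference of two geometric draws.
    Returns (noise, cost), where cost is the total iteration count."""
    p = 1.0 - math.exp(-1.0 / scale)
    a = geometric(p, rng)
    b = geometric(p, rng)
    return a - b, a + b  # |a - b| <= a + b, so cost always bounds |noise|

rng = random.Random(0)
samples = [discrete_laplace(10.0, rng) for _ in range(20000)]

big = [c for n, c in samples if abs(n) >= 10]
small = [c for n, c in samples if abs(n) < 10]
print(sum(big) / len(big), sum(small) / len(small))  # mean cost by noise size
```

Because the cost proxy upper-bounds the noise magnitude on every draw, observing a fast sample tells the attacker the noise was small; constant-time sampling would close this channel but is nontrivial for rejection-based samplers.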