Despite being raised as a problem over ten years ago, the imprecision of floating-point arithmetic continues to cause privacy failures in the implementations of differentially private noise mechanisms. In this paper, we highlight a new class of vulnerabilities, which we call \emph{precision-based attacks}, and which affect several open-source libraries. To address this vulnerability and implement differentially private mechanisms on floating-point space in a safe way, we propose a novel technique, called \emph{interval refining}. This technique has minimal error, provable privacy, and broad applicability. We use interval refining to design and implement a variant of the Laplace mechanism that is equivalent to sampling from the Laplace distribution and rounding to a float. We report on the performance of this approach, and discuss how interval refining can be used to implement other mechanisms safely, including the Gaussian mechanism and the exponential mechanism.
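The floating-point weakness underlying such attacks can be seen in miniature with the following sketch. It is not the paper's attack or its interval-refining mechanism; it only illustrates, under assumed parameters (Laplace scale $b = 1$, a uniform sampler that returns multiples of $2^{-53}$, and one arbitrarily chosen sampler value), how a textbook inverse-CDF Laplace sampler produces a ``porous'' set of doubles in the tails, which is the kind of structure that precision-based attacks exploit.

\begin{verbatim}
import math

# Illustrative assumptions only; not the paper's attack or mechanism.
b = 1.0                      # Laplace scale parameter (assumed)
u = 4096 * 2.0 ** -53        # one value a 53-bit uniform sampler can return
u_next = u + 2.0 ** -53      # the next possible sampler value

# Textbook inverse-CDF sampling maps u to the noise magnitude -b*ln(u).
y = -b * math.log(u)
y_next = -b * math.log(u_next)

gap = abs(y - y_next)        # distance between consecutive reachable outputs
ulp = math.ulp(y)            # spacing of doubles at that magnitude

print(f"gap between consecutive reachable outputs: {gap:.3e}")
print(f"spacing of doubles at that magnitude:      {ulp:.3e}")
print(f"unreachable doubles in between:            ~{gap / ulp:.0f}")
\end{verbatim}

Because many doubles in the tails can never be produced by such a sampler, the sets of reachable outputs for two neighboring inputs need not coincide, and an observed output can reveal which input was used. The interval-refining variant of the Laplace mechanism avoids this kind of gap by being equivalent to sampling from the real-valued Laplace distribution and rounding the result to a float.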