Bounding privacy leakage over compositions, i.e., privacy accounting, is a key challenge in differential privacy (DP). In practice, the privacy parameter ($\varepsilon$ or $\delta$) is often easy to estimate but hard to bound rigorously. In this paper, we propose a new differential privacy paradigm called estimate-verify-release (EVR), which addresses the challenge of providing a strict upper bound on the privacy parameter in DP compositions by converting an estimate of the privacy parameter into a formal guarantee. The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether the mechanism satisfies the estimated guarantee, and finally releases the query output based on the verification result. The core component of EVR is privacy verification. We develop a randomized privacy verifier using Monte Carlo (MC) techniques. Building on it, we propose an MC-based DP accountant that outperforms existing DP accounting techniques in both accuracy and efficiency. Our empirical evaluation shows that the EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
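To illustrate the Monte Carlo idea behind such an accountant (a minimal sketch, not the paper's exact verifier), the sketch below estimates $\delta(\varepsilon)$ for the Gaussian mechanism with sensitivity 1 and noise scale $\sigma$, using the standard privacy-loss formulation $\delta(\varepsilon) = \mathbb{E}_{x \sim P}\left[(1 - e^{\varepsilon - L(x)})_+\right]$ with $P = \mathcal{N}(1, \sigma^2)$, $Q = \mathcal{N}(0, \sigma^2)$, and $L(x) = \log \frac{p(x)}{q(x)} = \frac{2x - 1}{2\sigma^2}$. The function name `mc_delta_gaussian` is illustrative, not from the paper.

```python
import math
import random

def mc_delta_gaussian(sigma: float, eps: float,
                      n: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo estimate of delta(eps) for the Gaussian mechanism
    with sensitivity 1 and noise scale sigma.

    delta(eps) = E_{x ~ N(1, sigma^2)} [ max(0, 1 - exp(eps - L(x))) ],
    where L(x) = log p(x)/q(x) = (2x - 1) / (2 sigma^2) is the
    privacy loss between P = N(1, sigma^2) and Q = N(0, sigma^2).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(1.0, sigma)               # sample from P
        loss = (2.0 * x - 1.0) / (2.0 * sigma * sigma)
        total += max(0.0, 1.0 - math.exp(eps - loss))
    return total / n
```

Because the estimate is an empirical mean of bounded terms, its error shrinks as $O(1/\sqrt{n})$ and can be turned into a high-confidence upper bound via standard concentration inequalities, which is the step that makes an estimate verifiable rather than merely heuristic.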