Differential privacy (DP) is the de facto standard for private data release and private machine learning. Auditing black-box DP algorithms and mechanisms to certify whether they satisfy a claimed DP guarantee is challenging, especially in high dimensions. We propose relaxations of differential privacy based on new divergences between probability distributions: the kernel R\'enyi divergence and its regularized version. We show that the regularized kernel R\'enyi divergence can be estimated from samples even in high dimensions, giving rise to auditing procedures for $\varepsilon$-DP, $(\varepsilon,\delta)$-DP, and $(\alpha,\varepsilon)$-R\'enyi DP.
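The abstract does not spell out the estimator, but a minimal illustrative sketch of the kind of sample-based audit it describes might look as follows. Everything here is an assumption for illustration, not the paper's method: a Petz-style trace formula $\frac{1}{\alpha-1}\log \mathrm{Tr}(A^\alpha B^{1-\alpha})$ between regularized, trace-normalized feature covariances stands in for the regularized kernel R\'enyi divergence, random Fourier features stand in for the RKHS feature map, and the names `rff_features`, `reg_cov`, `psd_power`, and `kernel_renyi_divergence` are invented for this sketch.

```python
import numpy as np

def rff_features(X, n_feats=512, sigma=1.0, seed=0):
    """Random Fourier features approximating a Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], n_feats))
    b = rng.uniform(0.0, 2 * np.pi, size=n_feats)
    return np.sqrt(2.0 / n_feats) * np.cos(X @ W + b)

def reg_cov(F, lam):
    """Regularized, trace-normalized empirical covariance of features."""
    C = F.T @ F / F.shape[0] + lam * np.eye(F.shape[1])
    return C / np.trace(C)

def psd_power(A, p):
    """A^p for a symmetric PSD matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 1e-12, None)  # guard against tiny negative eigenvalues
    return (V * w**p) @ V.T

def kernel_renyi_divergence(X, Y, alpha=2.0, lam=1e-3, sigma=1.0, seed=0):
    """Plug-in Renyi-type divergence between the regularized feature
    covariances of two samples: log(Tr(A^a B^(1-a))) / (a - 1).
    Illustrative only; not the paper's exact estimator."""
    A = reg_cov(rff_features(X, sigma=sigma, seed=seed), lam)
    B = reg_cov(rff_features(Y, sigma=sigma, seed=seed), lam)
    val = np.trace(psd_power(A, alpha) @ psd_power(B, 1.0 - alpha))
    return np.log(val) / (alpha - 1.0)

# Toy black-box audit of the Gaussian mechanism in d = 100 dimensions:
# for adjacent inputs at distance 1, the mechanism satisfies
# (alpha, eps)-RDP with eps = alpha / (2 * noise^2).
d, n, noise, alpha = 100, 2000, 1.0, 2.0
rng = np.random.default_rng(1)
x, x_adj = np.zeros(d), np.eye(d)[0]          # adjacent inputs, distance 1
P = x + noise * rng.normal(size=(n, d))       # mechanism outputs on x
Q = x_adj + noise * rng.normal(size=(n, d))   # mechanism outputs on x'
est = kernel_renyi_divergence(P, Q, alpha=alpha, sigma=np.sqrt(d))
print(f"estimated divergence: {est:.4f}  vs  RDP bound: {alpha / (2 * noise**2):.4f}")
```

The audit logic this sketch gestures at: if the proposed divergence lower-bounds the R\'enyi divergence (as kernel relaxations of this kind typically do via data processing), then a sample-based estimate exceeding the claimed $(\alpha,\varepsilon)$ bound would flag a violation, while estimates below it are consistent with the claim.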