We present new methods for assessing the privacy guarantees of an algorithm with respect to R\'enyi Differential Privacy. To the best of our knowledge, this work is the first to address this problem in a black-box scenario, where only algorithmic outputs are available. To quantify privacy leakage, we devise a new estimator for the R\'enyi divergence between a pair of output distributions. This estimator is transformed into a statistical lower bound that is proven to hold for large samples with high probability. Our method is applicable to a broad class of algorithms, including many well-known examples from the privacy literature. We demonstrate the effectiveness of our approach through experiments encompassing algorithms and privacy-enhancing methods that have not been considered in related works.
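To make the central quantity concrete: the R\'enyi divergence of order $\alpha > 1$ between distributions $P$ and $Q$ is $D_\alpha(P \| Q) = \frac{1}{\alpha - 1} \log \sum_x P(x)^\alpha Q(x)^{1-\alpha}$. The following is a minimal illustrative sketch of a naive histogram plug-in estimator over black-box output samples; it is an assumption for illustration, not the estimator or the lower-bound construction from the paper, and plug-in estimates of this kind are biased for small samples.

```python
import math
from collections import Counter

def renyi_divergence(p, q, alpha):
    """D_alpha(P||Q) = log( sum_x p(x)^alpha * q(x)^(1-alpha) ) / (alpha - 1)."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0:
            if qi == 0:
                return math.inf  # P is not absolutely continuous w.r.t. Q
            total += pi ** alpha * qi ** (1 - alpha)
    return math.log(total) / (alpha - 1)

def plug_in_estimate(samples_p, samples_q, alpha):
    """Naive plug-in estimate from two sets of discrete black-box outputs:
    build empirical histograms over the joint support, then apply the
    closed-form divergence to the empirical frequencies."""
    support = sorted(set(samples_p) | set(samples_q))
    cp, cq = Counter(samples_p), Counter(samples_q)
    n, m = len(samples_p), len(samples_q)
    p = [cp[x] / n for x in support]
    q = [cq[x] / m for x in support]
    return renyi_divergence(p, q, alpha)
```

For identical empirical distributions the estimate is 0, and it grows as the two output distributions diverge; a sound lower bound, as targeted in this work, must additionally control the estimator's sampling error with high probability.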