The shuffle model of differential privacy has gained significant interest as an intermediate trust model between the standard local and central models [EFMRTT19; CSUZZ19]. A key result in this model is that randomly shuffling locally randomized data amplifies differential privacy guarantees. Such amplification implies substantially stronger privacy guarantees for systems in which data is contributed anonymously [BEMMRLRKTS17]. In this work, we improve the state of the art in privacy amplification by shuffling, both theoretically and numerically. Our first contribution is the first asymptotically optimal analysis of the R\'enyi differential privacy parameters for the shuffled outputs of LDP randomizers. Our second contribution is a new analysis of privacy amplification by shuffling. This analysis improves on the techniques of [FMT20] and leads to tighter numerical bounds in all parameter settings.
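To make the setting concrete, the following minimal Python sketch simulates the shuffle model, using binary randomized response as the local randomizer; the choice of randomizer and all function names are illustrative assumptions, not taken from this paper. Each user's report is epsilon-LDP on its own, and privacy amplification refers to the fact that the anonymized, shuffled collection of reports satisfies a much stronger central differential privacy guarantee.

```python
# Illustrative sketch of the shuffle model (assumption: binary
# randomized response as the local randomizer; names are hypothetical).
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """epsilon-LDP binary randomized response: keep the true bit with
    probability e^epsilon / (e^epsilon + 1), otherwise flip it."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def shuffled_reports(data, epsilon):
    """Apply the local randomizer to every record, then shuffle the
    reports uniformly at random before release; the analyzer only sees
    the anonymized multiset of reports."""
    reports = [randomized_response(x, epsilon) for x in data]
    random.shuffle(reports)  # hides which report came from which user
    return reports

# Example: n = 1000 users, each holding one private bit.
data = [random.randint(0, 1) for _ in range(1000)]
print(sum(shuffled_reports(data, epsilon=1.0)))
```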