We introduce the multi-dimensional Skellam mechanism, a discrete differential privacy mechanism based on the difference of two independent Poisson random variables. To quantify its privacy guarantees, we analyze the privacy loss distribution via a numerical evaluation and provide a sharp bound on the R\'enyi divergence between two shifted Skellam distributions. While the mechanism is useful in both centralized and distributed privacy applications, we investigate how it can be applied to federated learning with secure aggregation under communication constraints. Our theoretical findings and extensive experimental evaluations demonstrate that the Skellam mechanism provides the same privacy-accuracy trade-offs as the continuous Gaussian mechanism, even when the precision is low. More importantly, the Skellam mechanism is closed under summation, and sampling from it requires only sampling from a Poisson distribution -- an efficient routine that ships with all machine learning and data analysis software packages. These features, along with its discrete nature and competitive privacy-accuracy trade-offs, make it an attractive alternative to the newly introduced discrete Gaussian mechanism.
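As a rough illustration of the construction described above (not the paper's reference implementation), the following Python sketch draws multi-dimensional Skellam noise as the difference of two independent Poisson samples and adds it to an integer-quantized vector; the scale parameter `mu` and the example gradient are hypothetical placeholders.

```python
import numpy as np

def skellam_noise(mu, size, rng=None):
    """Sample symmetric Skellam(mu, mu) noise as the difference of two
    independent Poisson(mu) draws; each coordinate has mean 0 and variance 2*mu."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.poisson(mu, size) - rng.poisson(mu, size)

# Hypothetical usage: privatize an integer-quantized gradient coordinate-wise.
quantized_grad = np.array([3, -1, 4, 0, 2])  # illustrative integer-valued data
noisy_grad = quantized_grad + skellam_noise(mu=100.0, size=quantized_grad.shape)
```

Because the Skellam distribution is closed under summation, summing such noisy integer vectors across clients (e.g., under secure aggregation) again yields Skellam-distributed noise on the aggregate, which is the property the abstract highlights.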