We study the Gaussian mechanism in the shuffle model of differential privacy (DP). In particular, we characterize the mechanism's R\'enyi differential privacy (RDP), showing that it is of the form: $$ \epsilon(\lambda) \leq \frac{1}{\lambda-1}\log\left(\frac{e^{-\lambda/(2\sigma^2)}}{n^\lambda}\sum_{\substack{k_1+\dotsb+k_n=\lambda;\\k_1,\dotsc,k_n\geq 0}}\binom{\lambda}{k_1,\dotsc,k_n}e^{\sum_{i=1}^nk_i^2/(2\sigma^2)}\right). $$ We further prove that the RDP is strictly upper-bounded by the Gaussian RDP without shuffling. The shuffle Gaussian RDP is advantageous for composing multiple DP mechanisms, and we demonstrate its improvement over state-of-the-art approximate DP composition theorems in the privacy guarantees of the shuffle model. Moreover, we extend our study to the subsampled shuffle mechanism and the recently proposed shuffled check-in mechanism, both of which are protocols geared towards distributed/federated learning. Finally, we present an empirical study of these mechanisms to demonstrate the efficacy of employing the shuffle Gaussian mechanism under the distributed learning framework to guarantee rigorous user privacy.
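As a minimal illustration of the bound above, the following sketch evaluates the multinomial-sum expression directly for an integer R\'enyi order $\lambda$, noise scale $\sigma$, and $n$ shuffled users. The function names (`compositions`, `shuffle_gaussian_rdp`) and the example parameter values are our own illustrative assumptions, not definitions from the paper; the sketch is only practical for small $\lambda$ and $n$, since it enumerates all compositions of $\lambda$ into $n$ non-negative parts.

```python
import math


def compositions(total, parts):
    """Yield all tuples of `parts` non-negative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest


def shuffle_gaussian_rdp(lam, sigma, n):
    """Evaluate the RDP upper bound epsilon(lambda) of the shuffle Gaussian
    mechanism from the abstract's expression, for integer order lam >= 2,
    Gaussian noise scale sigma, and n shuffled users (illustrative helper)."""
    total = 0.0
    for ks in compositions(lam, n):
        # Multinomial coefficient lambda! / (k_1! ... k_n!)
        coef = math.factorial(lam)
        for k in ks:
            coef //= math.factorial(k)
        total += coef * math.exp(sum(k * k for k in ks) / (2 * sigma ** 2))
    inner = math.exp(-lam / (2 * sigma ** 2)) / n ** lam * total
    return math.log(inner) / (lam - 1)


if __name__ == "__main__":
    # For n = 1 the bound reduces to the unshuffled Gaussian RDP lambda / (2 sigma^2),
    # and it decreases as the number of shuffled users n grows.
    lam, sigma = 4, 2.0
    print("Gaussian RDP (no shuffle):", lam / (2 * sigma ** 2))
    for n in (1, 5, 20):
        print(f"shuffle Gaussian RDP bound, n={n}:", shuffle_gaussian_rdp(lam, sigma, n))
```

Running this with the (assumed) parameters above shows the shuffle bound coinciding with the plain Gaussian RDP at $n=1$ and tightening as $n$ increases, consistent with the claim that the shuffle Gaussian RDP is strictly upper-bounded by the Gaussian RDP without shuffling.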