We investigate unbiased high-dimensional mean estimators under differential privacy. We consider differentially private mechanisms whose expected output equals the mean of the input dataset, for every dataset drawn from a fixed convex domain $K$ in $\mathbb{R}^d$. In the setting of concentrated differential privacy, we show that, for every input, such an unbiased mean estimator introduces approximately at least as much error as a mechanism that adds Gaussian noise with a carefully chosen covariance. This holds when the error is measured in the $\ell_p$ norm for any $p \ge 2$. We extend this result to local differential privacy, and to approximate differential privacy, although for the latter the error lower bound holds either for a dataset or for one of its neighboring datasets. We also extend our results to mechanisms that take i.i.d.~samples from a distribution over $K$ and are unbiased with respect to the mean of that distribution.
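As a point of reference for the baseline the abstract compares against, here is a minimal sketch of an unbiased private mean estimator that adds Gaussian noise, under concentrated differential privacy (zCDP). This uses isotropic covariance for simplicity, not the carefully chosen covariance from the paper; the function name, the `diameter` parameter (an assumed bound on the $\ell_2$ diameter of $K$), and the privacy parameter `rho` are illustrative choices, not notation from the paper.

```python
import numpy as np


def zcdp_gaussian_mean(data, diameter, rho, rng=None):
    """Unbiased mean estimate: true empirical mean plus spherical Gaussian noise.

    Sketch only. Replacing one point in a dataset of n points drawn from a
    domain of l2-diameter `diameter` moves the mean by at most diameter / n,
    so adding N(0, sigma^2 I) noise with sigma = diameter / (n * sqrt(2 * rho))
    satisfies rho-zCDP. The noise has mean zero, so the mechanism is unbiased.
    """
    rng = np.random.default_rng() if rng is None else rng
    data = np.asarray(data, dtype=float)
    n, d = data.shape
    sigma = diameter / (n * np.sqrt(2.0 * rho))
    return data.mean(axis=0) + rng.normal(0.0, sigma, size=d)
```

The paper's lower bound says, roughly, that no unbiased private mechanism can beat this family of mechanisms by much, once the covariance is optimized for the domain $K$.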