We initiate the study of differentially private (DP) estimation with access to a small amount of public data. For private estimation of d-dimensional Gaussians, we assume that the public data comes from a Gaussian whose total variation distance to the underlying Gaussian of the private data may be vanishingly small in similarity. We show that under the constraints of pure or concentrated DP, d+1 public data samples suffice to remove any dependence on the range parameters of the private data distribution from the private sample complexity, a dependence that is otherwise known to be necessary in the absence of public data. For separated Gaussian mixtures, we assume that the underlying public and private distributions are identical, and we consider two settings: (1) given a dimension-independent amount of public data, the private sample complexity can be improved polynomially in the number of mixture components, and any dependence on the range parameters of the distribution can be removed under approximate DP; (2) given an amount of public data linear in the dimension, the private sample complexity can be made independent of the range parameters even under concentrated DP, and additional improvements to the overall sample complexity are possible.