We present a fairly general framework for reducing $(\varepsilon, \delta)$ differentially private (DP) statistical estimation to its non-private counterpart. As the main application of this framework, we give a polynomial time and $(\varepsilon,\delta)$-DP algorithm for learning (unrestricted) Gaussian distributions in $\mathbb{R}^d$. The sample complexity of our approach for learning the Gaussian up to total variation distance $\alpha$ is $\widetilde{O}\left(\frac{d^2}{\alpha^2}+\frac{d^2 \sqrt{\ln{1/\delta}}}{\alpha\varepsilon} \right)$, matching (up to logarithmic factors) the best known information-theoretic (non-efficient) sample complexity upper bound of Aden-Ali, Ashtiani, Kamath~(ALT'21). In an independent work, Kamath, Mouzakis, Singhal, Steinke, and Ullman~(arXiv:2111.04609) proved a similar result using a different approach and with $O(d^{5/2})$ sample complexity dependence on $d$. As another application of our framework, we provide the first polynomial time $(\varepsilon, \delta)$-DP algorithm for robust learning of (unrestricted) Gaussians.