When applying differential privacy to sensitive data, performance can often be improved by using external information such as other sensitive data, public data, or human priors. We propose to use the algorithms-with-predictions framework -- previously applied largely to improve time complexity or competitive ratios -- as a powerful way of designing and analyzing privacy-preserving methods that can take advantage of such external information to improve utility. For four important tasks -- quantile release, its extension to multiple quantiles, covariance estimation, and data release -- we construct prediction-dependent differentially private methods whose utility scales with natural measures of prediction quality. The analyses enjoy several advantages, including minimal assumptions about the data, natural ways of adding robustness to noisy predictions, and novel "meta" algorithms that can learn predictions from other (potentially sensitive) data. Overall, our results demonstrate how to enable differentially private algorithms to make use of and learn noisy predictions, which holds great promise for improving utility while preserving privacy across a variety of tasks.
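To make the quantile-release setting concrete, here is a minimal sketch of a standard pure-DP quantile mechanism (the exponential mechanism over inter-point intervals), not the paper's specific construction. The function name, parameters, and the way a prediction enters (as a tightened `[lo, hi]` range supplied by external information) are illustrative assumptions; a better prediction shrinks the range and hence improves utility at fixed privacy budget.

```python
import numpy as np

def dp_quantile(data, q, eps, lo, hi, rng=None):
    """Release an eps-DP estimate of the q-quantile of `data`, clipped to
    [lo, hi], via the exponential mechanism over inter-point intervals.
    A prediction can be encoded by choosing a tighter [lo, hi] range.
    (Illustrative sketch, not the paper's algorithm.)"""
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.sort(np.asarray(data, dtype=float)), lo, hi)
    n = len(x)
    # Interval endpoints: lo, x_(1), ..., x_(n), hi.
    edges = np.concatenate(([lo], x, [hi]))
    widths = np.maximum(np.diff(edges), 0.0)
    # Any point in interval k misranks the q-quantile by |k - q*n| data
    # points; this score has sensitivity 1 under add/remove of one record.
    k = np.arange(n + 1)
    scores = -np.abs(k - q * n)
    # Exponential mechanism: weight proportional to width * exp(eps*score/2).
    with np.errstate(divide="ignore"):  # log(0) -> -inf for empty intervals
        logw = eps * scores / 2.0 + np.log(widths)
    logw -= logw.max()  # stabilize before exponentiating
    p = np.exp(logw)
    p /= p.sum()
    i = rng.choice(n + 1, p=p)
    # Return a uniform draw from the chosen interval.
    return rng.uniform(edges[i], edges[i + 1])
```

With a large privacy budget the released value concentrates near the empirical quantile; with a small budget, the tightness of `[lo, hi]` (i.e., the quality of the prediction) governs how much probability mass falls on far-away intervals.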