This paper presents a new mechanism, the \emph{K-Norm Gradient} Mechanism (KNG), for producing sanitized statistical summaries that achieve \emph{differential privacy}. This new approach maintains the strong flexibility of the exponential mechanism while achieving the powerful utility performance of objective perturbation. KNG starts with an inherent objective function (often an empirical risk) and promotes summaries that are close to minimizing the objective by weighting them according to how far the gradient of the objective function is from zero. Working with the gradient instead of the original objective function allows for additional flexibility, as one can penalize using different norms. We show that, unlike the exponential mechanism, the noise added by KNG is asymptotically negligible compared to the statistical error for many problems. In addition to theoretical guarantees on privacy and utility, we confirm the utility of KNG empirically in the settings of linear and quantile regression through simulations.
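To make the mechanism concrete, below is a minimal sketch, assuming the KNG density is proportional to $\exp(-\epsilon \|\nabla f(\theta)\|/(2\Delta))$ with the Euclidean norm standing in for the $K$-norm. The toy data, the privacy parameters \texttt{eps} and \texttt{Delta}, the proposal scale, and the iteration count are all hypothetical choices for illustration; in particular, the sensitivity constant $\Delta$ is not derived from the data domain here, as an actual deployment would require.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy data for the linear-regression objective f(theta) = (1/2n)||y - X theta||^2.
n, d = 200, 2
X = rng.normal(size=(n, d))
theta_true = np.array([1.0, -0.5])
y = X @ theta_true + rng.normal(scale=0.1, size=n)

def grad_norm(theta):
    # Norm of the empirical-risk gradient at theta; the Euclidean
    # norm is used here purely as an illustrative choice of K-norm.
    g = X.T @ (X @ theta - y) / n
    return np.linalg.norm(g)

# Hypothetical privacy parameters for the sketch only: a real release must
# compute the sensitivity Delta from the assumed data domain.
eps, Delta = 1.0, 1.0

def log_density(theta):
    # Unnormalized KNG-style log density: -eps * ||grad f(theta)|| / (2 * Delta).
    return -eps * grad_norm(theta) / (2 * Delta)

# Random-walk Metropolis sampler targeting the unnormalized density, since it
# is generally not a standard family one can sample from directly.
theta = np.zeros(d)
for t in range(20000):
    prop = theta + rng.normal(scale=0.2, size=d)
    if np.log(rng.uniform()) < log_density(prop) - log_density(theta):
        theta = prop

# One (approximate) private release of the regression coefficients.
print("KNG draw:", theta)
print("non-private OLS:", np.linalg.lstsq(X, y, rcond=None)[0])
\end{verbatim}

Because the density concentrates where the gradient is near zero, draws cluster around the empirical minimizer; the MCMC step is a pragmatic stand-in for whatever exact or approximate sampler a given objective admits.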