In recent years, local differential privacy (LDP) has emerged as the technique of choice for privacy-preserving data collection in scenarios where the aggregator is not trustworthy. LDP provides client-side privacy by adding noise at the user's end, so clients need not rely on the trustworthiness of the aggregator. In this work, we provide a noise-aware probabilistic modeling framework that allows Bayesian inference to take into account the noise added for privacy under LDP, conditioned on locally perturbed observations. The stronger privacy protection provided by LDP protocols (compared to the central model) comes at the cost of a much harsher privacy-utility trade-off. Our framework tackles several computational and statistical challenges posed by LDP for accurate uncertainty quantification in Bayesian settings. We demonstrate the efficacy of our framework in parameter estimation for univariate and multivariate distributions as well as logistic and linear regression.
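As a minimal sketch of the client-side perturbation step described above, the snippet below applies the Laplace mechanism locally, one common ε-LDP primitive for bounded numeric data. The function names and parameters are illustrative, not the paper's actual protocol; the key point is that only noisy values leave the client, and the zero-mean noise can be accounted for at aggregation time.

```python
import numpy as np

rng = np.random.default_rng(0)

def ldp_perturb(x, eps, lo=0.0, hi=1.0):
    """Perturb one bounded value x in [lo, hi] at the client with the
    Laplace mechanism; the sensitivity of the identity query is hi - lo."""
    scale = (hi - lo) / eps
    return x + rng.laplace(0.0, scale)

# Each client holds a private value and releases only a noisy version.
true_values = rng.uniform(0.0, 1.0, size=10_000)
eps = 1.0
released = np.array([ldp_perturb(v, eps) for v in true_values])

# Laplace noise has zero mean, so the sample mean of the noisy releases
# is an unbiased (though higher-variance) estimate of the true mean.
print(abs(released.mean() - true_values.mean()))
```

A noise-aware Bayesian treatment, as the abstract proposes, goes further than this debiased point estimate: it conditions the likelihood on the perturbed observations so that posterior uncertainty reflects both sampling variability and the privacy noise.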