Although local differential privacy (LDP) protects individual users' data from inference by an untrusted data curator, recent studies show that an attacker can launch a data poisoning attack from the user side, injecting carefully crafted bogus data into the LDP protocols to maximally skew the curator's final estimate. In this work, we advance this line of research by proposing a new fine-grained attack that allows the attacker to fine-tune and simultaneously manipulate mean and variance estimates, which underpin many real-world analytical tasks. To accomplish this goal, the attack leverages the characteristics of LDP to inject fake data directly into the output domain of the local LDP instance. We call our attack the output poisoning attack (OPA). We observe a security-privacy consistency, in which a smaller privacy loss enhances the security of LDP; this contradicts the known security-privacy trade-off from prior work. We further study this consistency and reveal a more holistic view of the threat landscape of data poisoning attacks on LDP. We comprehensively evaluate our attack against a baseline attack that intuitively supplies false input to LDP. The experimental results show that OPA outperforms the baseline on three real-world datasets. We also propose a novel defense method that can recover estimation accuracy from the polluted data collection and offer insights into secure LDP design.
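To make the input-poisoning baseline vs. output-poisoning distinction concrete, here is a minimal sketch of both attacks against a Laplace-based LDP mean estimator. This is an illustrative assumption, not the paper's actual protocol: the mechanism, the user counts `n` and `m`, and the fake output value `B` are all hypothetical choices. The key point the sketch shows is that an input-poisoning attacker is confined to the bounded input domain (each fake report contributes at most +1 in expectation), while an OPA-style attacker emits values straight from the much wider output domain of the local randomizer and therefore skews the estimate far more per fake user.

```python
import math
import random

random.seed(7)

def laplace_perturb(v, eps):
    """LDP report for a value in [-1, 1] via the Laplace mechanism (sensitivity 2)."""
    u = random.random() - 0.5
    noise = -(2.0 / eps) * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return v + noise

eps = 1.0
n, m = 10_000, 500  # genuine users, fake users (illustrative sizes)
genuine = [random.uniform(-1, 1) for _ in range(n)]
honest_reports = [laplace_perturb(v, eps) for v in genuine]

# Baseline input poisoning: fake users run the protocol honestly on a bogus
# input. The input domain is bounded, so each fake report is +1 in expectation.
ipa_reports = honest_reports + [laplace_perturb(1.0, eps) for _ in range(m)]

# Output poisoning (OPA-style): fake users bypass the local randomizer and emit
# values chosen directly from its output domain, which is far wider than the
# input domain. B is a hypothetical extreme-but-plausible Laplace output.
B = 50.0
opa_reports = honest_reports + [B] * m

mean = lambda xs: sum(xs) / len(xs)
print(mean(honest_reports), mean(ipa_reports), mean(opa_reports))
```

With these (assumed) parameters, the output-poisoning estimate drifts from the true mean by roughly `B * m / (n + m)`, an order of magnitude more than the input-poisoning baseline, which illustrates why operating in the output domain gives the attacker finer and stronger control.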