Locally differentially private (LDP) graph analysis enables private analysis of a graph that is distributed across multiple users. However, such computations are vulnerable to data poisoning attacks, in which an adversary skews the results by submitting malformed data. In this paper, we formally study the impact of poisoning attacks on graph degree estimation protocols under LDP. We make two key technical contributions. First, we observe that LDP makes a protocol more vulnerable to poisoning: the impact of poisoning is worse when the adversary can directly poison their (noisy) responses, rather than their input data. Second, we observe that graph data is naturally redundant: every edge is shared between two users. Leveraging this data redundancy, we design robust degree estimation protocols under LDP that significantly reduce the impact of data poisoning and compute degree estimates with high accuracy. We evaluate our proposed robust degree estimation protocols under poisoning attacks on real-world datasets to demonstrate their efficacy in practice.
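A minimal sketch of the two ideas in the abstract, under assumptions not taken from the paper: degree reports are perturbed with the standard Laplace mechanism for epsilon-edge-LDP (sensitivity 1, since adding or removing one edge changes a degree by 1), and the edge redundancy is checked by flagging edges that appear in only one endpoint's adjacency list. The function names (`ldp_degree`, `inconsistent_edges`) are illustrative, not the paper's protocol.

```python
import math
import random


def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)


def ldp_degree(true_degree, epsilon, rng):
    """Epsilon-LDP degree report: sensitivity 1, so noise scale 1/epsilon.

    Hypothetical baseline mechanism, not the paper's robust protocol.
    """
    return true_degree + laplace_noise(1.0 / epsilon, rng)


def inconsistent_edges(adj):
    """Return directed reports {u -> v} not confirmed by v's own list.

    Illustrates the data-redundancy idea: every honest edge is reported by
    both endpoints, so one-sided reports are candidates for poisoning.
    """
    return [(u, v) for u in adj for v in sorted(adj[u])
            if u not in adj.get(v, set())]


rng = random.Random(0)
# The Laplace report is unbiased: averaging many independent reports of a
# node with true degree 25 recovers roughly 25.
reports = [ldp_degree(25, 1.0, rng) for _ in range(20000)]
mean_est = sum(reports) / len(reports)

# User 2 claims an edge to user 3, but user 3 does not confirm it.
adj = {1: {2}, 2: {1, 3}, 3: set()}
flagged = inconsistent_edges(adj)
```

The redundancy check alone does not stop a colluding pair of adversaries who confirm each other's fake edges; the abstract's claim is only that redundancy significantly reduces (not eliminates) the poisoning impact.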