This work studies anomaly detection under differential privacy with Gaussian perturbation, using both statistical and information-theoretic tools. In our setting, an adversary seeks to alter the differentially private output of a statistical dataset by inserting additional records, exploiting the privacy guarantee itself to avoid detection. To this end, we first characterize, via hypothesis testing, a statistical threshold for the adversary that balances the privacy budget against the induced bias (the impact of the attack) so that the attack remains undetected. In addition, we establish a privacy-distortion tradeoff for the Gaussian mechanism, in the spirit of the classical rate-distortion function, using an information-theoretic approach, and derive an upper bound on the variance of the attacker's additional data as a function of the sensitivity and the second-order statistics of the original data. Lastly, we introduce a new privacy metric based on Chernoff information for classifying adversaries under differential privacy, as a stronger alternative for the Gaussian mechanism. Analytical results are supported by numerical evaluations.
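The detection setting described above can be illustrated with a minimal sketch. The helper names (`gaussian_sigma`, `private_mean`, `z_score`) and all parameter values are illustrative assumptions, not the paper's construction: a bounded mean query is released via the standard Gaussian mechanism, an attacker inserts biased records to shift the released statistic, and a defender applies a simple z-test against the hypothesized mean using the known noise scale. This is only a toy instance of the hypothesis-testing viewpoint, not the paper's actual threshold characterization.

```python
import math
import random

random.seed(0)  # for reproducibility of the toy experiment


def gaussian_sigma(sensitivity, eps, delta):
    # Classical (eps, delta)-DP noise calibration for the Gaussian mechanism.
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps


def private_mean(data, eps, delta, lo=0.0, hi=1.0):
    # Release the mean of values clipped to [lo, hi] with Gaussian noise.
    n = len(data)
    sens = (hi - lo) / n  # sensitivity of the mean on bounded data
    sigma = gaussian_sigma(sens, eps, delta)
    return sum(data) / n + random.gauss(0.0, sigma), sigma


def z_score(released, mu0, sigma):
    # Defender's test statistic: deviation of the released value from the
    # hypothesized mean mu0, in units of the mechanism's noise scale.
    return abs(released - mu0) / sigma


# Honest dataset: 1000 values concentrated around 0.5 (assumed toy data).
data = [random.uniform(0.4, 0.6) for _ in range(1000)]

# Attacker inserts 50 extremal records to bias the released mean upward.
attacked = data + [1.0] * 50

eps, delta = 1.0, 1e-5
m_clean, sigma = private_mean(data, eps, delta)
m_attacked, _ = private_mean(attacked, eps, delta)

# A small z-score is consistent with the noise alone; a large one flags
# an anomaly. The attacker's task is to keep the induced bias below the
# detection threshold while the noise scale is fixed by the privacy budget.
print("clean z-score:   ", z_score(m_clean, 0.5, sigma))
print("attacked z-score:", z_score(m_attacked, 0.5, sigma))
```

Note the tension the abstract refers to: a smaller privacy budget `eps` enlarges `sigma`, which both protects individuals and gives the attacker more room to hide bias under the noise.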