Local differential privacy (LDP) provides a strong privacy guarantee suitable for distributed settings such as federated learning (FL). LDP mechanisms in FL protect a client's gradient by randomizing it on the client; however, how can we interpret the privacy level given by the randomization? Moreover, what types of attacks can we mitigate in practice? To answer these questions, we introduce an empirical privacy test that measures lower bounds of LDP. The privacy test estimates how well an adversary can predict whether a reported randomized gradient was crafted from a raw gradient $g_1$ or $g_2$. We then instantiate six adversaries in FL under LDP to measure empirical LDP at various attack surfaces, including a worst-case attack that reaches the theoretical upper bound of LDP. The empirical privacy test together with the adversary instantiations enables us to interpret LDP more intuitively and to discuss relaxing the privacy parameter until a particular instantiated attack surfaces. We also present numerical observations of the measured privacy in these adversarial settings and show that the worst-case attack is not realistic in FL. Finally, we discuss the possible relaxation of privacy levels in FL under LDP.
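To make the distinguishing game behind the empirical privacy test concrete, the following is a minimal Python sketch of how a lower bound on the LDP parameter could be estimated from an adversary's false positive and false negative rates when guessing whether a randomized report came from $g_1$ or $g_2$. The Laplace randomizer, the `nearest_gradient_attacker`, and all function names are illustrative assumptions, not the paper's actual mechanism or adversary instantiations.

```python
import numpy as np

def randomize(gradient, epsilon, clip=1.0, rng=None):
    """Hypothetical client-side LDP mechanism: clip the gradient and add
    Laplace noise calibrated to the L1 sensitivity (2 * clip)."""
    rng = rng or np.random.default_rng()
    g = np.clip(gradient, -clip, clip)
    return g + rng.laplace(0.0, 2.0 * clip / epsilon, size=g.shape)

def empirical_epsilon_lower_bound(g1, g2, epsilon, attacker, trials=100_000):
    """Let `attacker` guess whether each randomized report was crafted from
    g1 or g2, then convert its false positive rate (FPR) and false negative
    rate (FNR) into an empirical epsilon lower bound:
        eps_hat = max(log((1 - FNR) / FPR), log((1 - FPR) / FNR))."""
    rng = np.random.default_rng(0)
    n0 = n1 = fp = fn = 0
    for _ in range(trials):
        truth = rng.integers(2)            # 0 -> report from g1, 1 -> from g2
        report = randomize(g1 if truth == 0 else g2, epsilon, rng=rng)
        guess = attacker(report, g1, g2)   # adversary outputs 0 or 1
        if truth == 0:
            n0 += 1
            fp += int(guess == 1)
        else:
            n1 += 1
            fn += int(guess == 0)
    fpr = max(fp / n0, 1e-6)               # crude smoothing; a careful test would
    fnr = max(fn / n1, 1e-6)               # use Clopper-Pearson confidence bounds
    return max(np.log((1 - fnr) / fpr), np.log((1 - fpr) / fnr))

def nearest_gradient_attacker(report, g1, g2):
    """Illustrative adversary: guess the raw gradient closest in L1 distance."""
    return 0 if np.linalg.norm(report - g1, 1) <= np.linalg.norm(report - g2, 1) else 1

if __name__ == "__main__":
    g1, g2 = np.full(10, 0.9), np.full(10, -0.9)
    eps = 2.0
    print("theoretical epsilon:", eps)
    print("empirical lower bound:",
          empirical_epsilon_lower_bound(g1, g2, eps, nearest_gradient_attacker))
```

The gap between the printed empirical lower bound and the theoretical epsilon illustrates the kind of comparison the paper draws across its adversary instantiations: a weak attacker yields a loose lower bound, while a worst-case attacker would push the bound toward the theoretical upper bound.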