Gradient inversion attacks (also known as input recovery from gradients) are an emerging threat to the security and privacy of federated learning, whereby malicious eavesdroppers or participants in the protocol can partially recover clients' private data. This paper evaluates existing attacks and defenses. We find that some attacks make strong assumptions about the setup; relaxing such assumptions can substantially weaken these attacks. We then evaluate the benefits of three proposed defense mechanisms against gradient inversion attacks. We show the trade-offs between privacy leakage and data utility for these defense methods, and find that combining them appropriately makes the attacks less effective, even under the original strong assumptions. We also estimate the computation cost of end-to-end recovery of a single image under each evaluated defense. Our findings suggest that state-of-the-art attacks can currently be defended against with minor data utility loss, as summarized in a list of potential strategies. Our code is available at: https://github.com/Princeton-SysML/GradAttack.
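For readers unfamiliar with the attack class, the following is a minimal sketch of the core gradient-matching idea behind gradient inversion (in the spirit of "deep leakage from gradients"), assuming a toy linear model, a known label, and a single observed gradient; all names here are illustrative, and this is not the GradAttack implementation.

```python
import torch
import torch.nn as nn

# Minimal gradient-inversion sketch: the attacker optimizes a dummy input so
# that its gradient matches the gradient the victim shared in federated learning.
torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))  # toy model (assumption)
criterion = nn.CrossEntropyLoss()

# Victim side: compute a gradient on private data (what the server observes).
x_true = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([4])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())
true_grads = [g.detach() for g in true_grads]

# Attacker side: recover the input by matching gradients.
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
optimizer = torch.optim.Adam([x_dummy], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    dummy_loss = criterion(model(x_dummy), y_true)  # label assumed known/inferred
    # create_graph=True lets us backpropagate through the gradient computation.
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    optimizer.step()
# After convergence, x_dummy approximates the private input x_true.
```

Real attacks add regularization and stronger priors, but the gradient-matching objective above is the common core that the evaluated defenses aim to disrupt.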