Gradient inversion attacks on federated learning systems reconstruct client training data from exchanged gradient information. To defend against such attacks, a variety of defense mechanisms have been proposed. However, they usually lead to an unacceptable trade-off between privacy and model utility. Recent observations suggest that dropout, when added to neural networks, could mitigate gradient leakage and improve model utility. Unfortunately, this phenomenon has not yet been systematically studied. In this work, we thoroughly analyze the effect of dropout on iterative gradient inversion attacks. We find that state-of-the-art attacks are not able to reconstruct the client data due to the stochasticity dropout induces during model training. Nonetheless, we argue that dropout does not offer reliable protection if the dropout-induced stochasticity is adequately modeled during attack optimization. Consequently, we propose a novel Dropout Inversion Attack (DIA) that jointly optimizes for client data and dropout masks to approximate the stochastic client model. We conduct an extensive, systematic evaluation of our attack on four seminal model architectures and three image classification datasets of increasing complexity. We find that our proposed attack bypasses the protection seemingly induced by dropout and reconstructs client data with high fidelity. Our work demonstrates that privacy-inducing changes to model architectures alone cannot be assumed to reliably protect against gradient leakage and should therefore be combined with complementary defense mechanisms.
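To illustrate the core idea of jointly optimizing for client data and dropout masks, the following is a minimal, hypothetical sketch of a dropout-aware gradient inversion loop. It is not the paper's implementation: the toy model, the continuous (sigmoid) relaxation of the binary dropout mask, and all parameter names are illustrative assumptions. The sketch optimizes dummy inputs, dummy labels, and a relaxed dropout mask so that the gradients they produce match the gradients observed from the client.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedMLP(nn.Module):
    """Toy two-layer MLP whose dropout mask is passed in explicitly (illustrative only)."""

    def __init__(self, d_in=784, d_hidden=256, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, n_classes)

    def forward(self, x, mask):
        h = F.relu(self.fc1(x)) * mask  # explicit (relaxed) dropout mask instead of nn.Dropout
        return self.fc2(h)


def dropout_inversion_sketch(model, observed_grads, n_steps=200, lr=0.1,
                             batch=1, d_in=784, d_hidden=256, n_classes=10):
    """Jointly optimize dummy data, dummy labels, and a relaxed dropout mask
    so that the resulting gradients match the observed client gradients."""
    dummy_x = torch.randn(batch, d_in, requires_grad=True)
    dummy_y = torch.randn(batch, n_classes, requires_grad=True)
    mask_logits = torch.zeros(batch, d_hidden, requires_grad=True)
    opt = torch.optim.Adam([dummy_x, dummy_y, mask_logits], lr=lr)

    for _ in range(n_steps):
        opt.zero_grad()
        mask = torch.sigmoid(mask_logits)  # continuous relaxation of the binary dropout mask
        pred = model(dummy_x, mask)
        loss = F.cross_entropy(pred, dummy_y.softmax(dim=-1))
        # Gradients of the dummy loss w.r.t. model parameters, kept in the graph
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Gradient-matching objective: squared distance to the observed client gradients
        grad_diff = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        grad_diff.backward()
        opt.step()

    return dummy_x.detach(), torch.sigmoid(mask_logits).detach()
```

In this reading, treating the dropout mask itself as an optimization variable is what allows the attack to account for the stochasticity that otherwise breaks standard iterative inversion attacks.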