Federated learning synchronizes models through gradient transmission and aggregation. However, these gradients pose significant privacy risks, as sensitive training data is embedded within them. Existing gradient inversion attacks suffer severely degraded reconstruction performance when gradients are perturbed by noise, a common defense mechanism. In this paper, we introduce gradient-guided conditional diffusion models for reconstructing private images from leaked gradients, without prior knowledge of the target data distribution. Our approach leverages the inherent denoising capability of diffusion models to circumvent the partial protection offered by noise perturbation, thereby improving attack performance under such defenses. We further provide a theoretical analysis of the reconstruction error bounds and the convergence properties of the attack loss, characterizing the impact of key factors, such as the noise magnitude and the architecture of the attacked model, on reconstruction quality. Extensive experiments demonstrate that our attack achieves superior reconstruction performance on gradients perturbed by Gaussian noise, and confirm our theoretical findings.
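To make the attack pipeline concrete, the sketch below shows one plausible instantiation of gradient-guided diffusion sampling in PyTorch. It is a minimal sketch under assumptions, not the paper's actual algorithm: all names (`denoiser`, `victim`, `guided_ddpm_sample`, `guidance_scale`) are illustrative, and the guidance rule mirrors classifier-style guidance, where a pretrained DDPM performs standard reverse diffusion and, at each step, the sample is nudged by the gradient of a gradient-matching loss so that the gradients it induces approach the leaked (possibly noise-perturbed) ones.

```python
# Hypothetical sketch of gradient-guided diffusion sampling for gradient
# inversion; `denoiser` and `victim` are assumed models, and the guidance
# rule mirrors classifier guidance rather than the paper's exact method.
import torch
import torch.nn.functional as F


def gradient_matching_loss(x, y, victim, leaked_grads):
    """Squared L2 distance between the gradients induced by candidate x
    and the leaked (possibly noise-perturbed) gradients."""
    loss = F.cross_entropy(victim(x), y)
    # create_graph=True so the match loss can later be differentiated w.r.t. x.
    grads = torch.autograd.grad(loss, list(victim.parameters()), create_graph=True)
    return sum(((g - gl) ** 2).sum() for g, gl in zip(grads, leaked_grads))


def guided_ddpm_sample(denoiser, victim, y, leaked_grads, shape, betas,
                       guidance_scale=1.0, device="cpu"):
    """Reverse DDPM sampling whose posterior mean is shifted against the
    gradient-matching loss at every denoising step."""
    alphas = 1.0 - betas                          # betas: 1-D tensor, e.g.
    alpha_bars = torch.cumprod(alphas, dim=0)     # torch.linspace(1e-4, 0.02, T)
    x = torch.randn(shape, device=device)
    for t in reversed(range(len(betas))):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        with torch.no_grad():
            eps = denoiser(x, t_batch)            # predicted noise at step t
        # Guidance term: gradient of the matching loss w.r.t. the sample.
        x_in = x.detach().requires_grad_(True)
        match = gradient_matching_loss(x_in, y, victim, leaked_grads)
        guide = torch.autograd.grad(match, x_in)[0]
        # Standard DDPM posterior mean, then shift it against the loss.
        mean = (x - (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])
        mean = mean - guidance_scale * guide
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = (mean + torch.sqrt(betas[t]) * noise).detach()
    return x
```

In this sketch, `guidance_scale` trades fidelity to the leaked gradients against the diffusion prior, and because the matching loss is itself built from gradients of the victim model, guiding with it requires a second-order backward pass (hence `create_graph=True`), which is the main computational overhead relative to plain DDPM sampling.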