Federated Learning (FL) enables collaborative model training among a large number of participants without explicit data sharing. However, the approach is vulnerable to privacy inference attacks. In particular, gradient leakage attacks, which recover sensitive data from shared model gradients with a high success rate, put FL models at elevated risk because gradient exchange is inherent to the FL architecture. Most alarmingly, a gradient leakage attack can be carried out covertly: it does not degrade training performance while the attacker reconstructs information about the raw data from the gradients. The two most common defenses proposed for this problem are homomorphic encryption and noise injection under differential privacy. Each suffers from a major drawback: the key generation process of homomorphic encryption becomes tedious as the number of clients grows, and noise-based differential privacy causes a significant drop in global model accuracy. As a countermeasure, we propose a mixed-precision quantized FL scheme and empirically show that it resolves both of the issues above. In addition, our approach provides greater robustness because different layers of the deep model are quantized with different precisions and quantization modes. We validated our method on three benchmark datasets and found only a minimal accuracy drop in the global model after applying quantization.
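To make the core idea concrete, the sketch below quantizes a client's per-layer gradients with different bit widths and quantization modes before they are shared with the server. The layer names, bit widths, and the simple uniform quantizer are illustrative assumptions for this sketch, not the exact configuration evaluated in the paper.

```python
import numpy as np

# Hypothetical per-layer precision plan: layer name -> (num_bits, mode).
# The bit widths and symmetric/asymmetric choices are illustrative only.
PRECISION_PLAN = {
    "conv1": (8, "symmetric"),
    "conv2": (6, "symmetric"),
    "fc1":   (4, "asymmetric"),
    "fc2":   (8, "asymmetric"),
}

def quantize(tensor, num_bits, mode):
    """Uniformly quantize a gradient tensor to num_bits and dequantize it back."""
    levels = 2 ** num_bits - 1
    if mode == "symmetric":
        scale = np.max(np.abs(tensor)) / (levels / 2)
        scale = scale if scale > 0 else 1.0
        return np.round(tensor / scale) * scale
    # Asymmetric: shift the range so the minimum value maps to level 0.
    t_min, t_max = tensor.min(), tensor.max()
    scale = (t_max - t_min) / levels
    scale = scale if scale > 0 else 1.0
    return np.round((tensor - t_min) / scale) * scale + t_min

def quantize_client_update(gradients):
    """Apply the mixed-precision plan to a client's per-layer gradients
    before they are communicated to the aggregation server."""
    return {
        name: quantize(grad, *PRECISION_PLAN.get(name, (8, "symmetric")))
        for name, grad in gradients.items()
    }

# Example with synthetic per-layer gradients for one client.
rng = np.random.default_rng(0)
fake_grads = {name: rng.normal(size=(4, 4)) for name in PRECISION_PLAN}
quantized_update = quantize_client_update(fake_grads)
```

In this sketch, the coarser quantization of some layers discards the fine-grained gradient detail that a leakage attack would need to reconstruct raw inputs, while higher-precision layers limit the accuracy cost of the global model.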