Malicious attackers and an honest-but-curious server can steal private client data from the gradients uploaded in federated learning. Although existing protection methods (e.g., additively homomorphic cryptosystems) can guarantee the security of a federated learning system, they incur additional computation and communication costs. To mitigate these costs, we propose the \texttt{FedAGE} framework, which enables the server to aggregate gradients in an encoded domain without accessing the raw gradients of any single client. Thus, \texttt{FedAGE} prevents a curious server from stealing gradients while maintaining the same prediction performance and incurring no additional communication cost. Furthermore, we theoretically prove that the proposed encoding-decoding framework is a Gaussian mechanism for differential privacy. Finally, we evaluate \texttt{FedAGE} under several federated settings, and the results demonstrate the efficacy of the proposed framework.
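To make the idea of encoded-domain aggregation concrete, the following is a minimal sketch, assuming a simple Gaussian-noise encoding on the client side; the function names \texttt{encode} and \texttt{aggregate} and the noise scale \texttt{sigma} are illustrative assumptions and do not reflect the actual \texttt{FedAGE} encoding scheme.

\begin{verbatim}
import numpy as np

# Hypothetical illustration: each client masks (encodes) its gradient with
# Gaussian noise before upload, so the server only ever sees encoded
# gradients and aggregates them directly in the encoded domain.
rng = np.random.default_rng(0)

def encode(grad, sigma=0.1):
    """Client-side encoding: add Gaussian noise (a Gaussian mechanism)."""
    return grad + rng.normal(0.0, sigma, size=grad.shape)

def aggregate(encoded_grads):
    """Server-side aggregation performed entirely on encoded gradients."""
    return np.mean(encoded_grads, axis=0)

# Example: three clients with 4-dimensional gradients.
client_grads = [rng.normal(size=4) for _ in range(3)]
encoded = [encode(g) for g in client_grads]
aggregated = aggregate(encoded)  # the server never sees raw gradients
print(aggregated)
\end{verbatim}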