Federated learning (FL) aims to protect data privacy by enabling clients to collaboratively build machine learning models without sharing their private data. However, recent work demonstrates that FL is vulnerable to gradient-based data recovery attacks. A variety of privacy-preserving techniques have been leveraged to further enhance the privacy of FL. Nonetheless, they are either expensive in computation or communication (e.g., homomorphic encryption) or suffer from precision loss (e.g., differential privacy). In this work, we propose \textsc{FedCG}, a novel \underline{fed}erated learning method that leverages \underline{c}onditional \underline{g}enerative adversarial networks to achieve high-level privacy protection while still maintaining competitive model performance. More specifically, \textsc{FedCG} decomposes each client's local network into a private extractor and a public classifier, and keeps the extractor local to protect privacy. Instead of exposing extractors, which are the culprit of privacy leakage, \textsc{FedCG} shares clients' generators with the server to aggregate common knowledge, aiming to enhance the performance of clients' local networks. Extensive experiments demonstrate that \textsc{FedCG} achieves competitive model performance compared with baseline FL methods, and numerical privacy analysis shows that \textsc{FedCG} has a high level of privacy-preserving capability.
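The sharing scheme described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the function and parameter names are ours, not from the paper's implementation): each client partitions its parameters into a private extractor and shared parts (classifier and cGAN generator), and the server performs a FedAvg-style average over the shared parts only.

```python
# Hypothetical sketch of FedCG's parameter-sharing scheme. Names such as
# "extractor."/"classifier."/"generator." prefixes are illustrative assumptions,
# not the paper's actual code.

def split_client_model(params):
    """Partition a client's parameter dict into private and shared parts.

    The extractor stays on the client; everything else may be uploaded.
    """
    private = {k: v for k, v in params.items() if k.startswith("extractor.")}
    shared = {k: v for k, v in params.items() if not k.startswith("extractor.")}
    return private, shared


def server_aggregate(shared_updates):
    """FedAvg-style mean over the shared (classifier + generator) parameters."""
    n = len(shared_updates)
    keys = shared_updates[0].keys()
    return {k: sum(u[k] for u in shared_updates) / n for k in keys}


# Toy example with scalar "parameters" standing in for weight tensors:
client_a = {"extractor.w": 1.0, "classifier.w": 2.0, "generator.w": 4.0}
client_b = {"extractor.w": 9.0, "classifier.w": 4.0, "generator.w": 8.0}

priv_a, shared_a = split_client_model(client_a)
priv_b, shared_b = split_client_model(client_b)

# Extractors never leave the clients; only classifier/generator are averaged.
global_shared = server_aggregate([shared_a, shared_b])
```

In a real system the values would be weight tensors and the aggregated classifier/generator would be broadcast back to clients, but the privacy-relevant point is the same: the extractor parameters never appear in `shared_a`/`shared_b`, so gradient-based recovery attacks on the uploaded parts cannot exploit them.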