Federated learning (FL) is a distributed learning framework that enables edge devices to collaboratively train a global model while keeping raw data local. Although FL avoids directly exposing local datasets, sensitive information can still be inferred from the shared models. To address this privacy issue, differential privacy (DP) mechanisms are leveraged to provide a formal privacy guarantee. However, when FL is deployed at the wireless edge with over-the-air computation, ensuring client-level DP faces significant challenges. In this paper, we propose a novel wireless FL scheme called private federated edge learning with sparsification (PFELS), which provides a client-level DP guarantee using intrinsic channel noise while reducing communication and energy overhead and improving model accuracy. The key idea of PFELS is that each device first compresses its model update and then adaptively sets the transmit power of the compressed update according to the wireless channel status, without adding any artificial noise. We provide a privacy analysis of PFELS and prove its convergence under general non-convex and non-IID settings. Experimental results show that, compared with prior work, PFELS improves accuracy under the same DP guarantee while simultaneously reducing communication and energy costs.
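To make the key idea concrete, below is a minimal Python sketch of one over-the-air aggregation round. It assumes top-k sparsification as the compressor and a simple channel-inversion power-control rule; the function names, the 10% sparsity level, and the power-control rule are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_sparsify(update, k):
    """Zero all but the k largest-magnitude entries of a model update."""
    flat = update.ravel()
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(update.shape)

def l2_clip(update, c):
    """Clip the update to L2 norm at most c, bounding per-client sensitivity."""
    return update * min(1.0, c / max(np.linalg.norm(update), 1e-12))

def over_the_air_round(updates, channel_gains, clip_c, power_budget, noise_std):
    """One aggregation round: devices sparsify and clip their updates, then
    pre-scale them so the signals received over the multiple-access channel
    align; the server observes the superposition plus receiver noise, which
    plays the role of the DP noise (no artificial noise is added)."""
    n = len(updates)
    # Alignment factor limited by the weakest channel and the power budget
    # (hypothetical power-control rule, for illustration only).
    alpha = min(np.sqrt(power_budget) * h / clip_c for h in channel_gains)
    received = np.zeros_like(updates[0])
    for g, h in zip(updates, channel_gains):
        x = l2_clip(top_k_sparsify(g, k=max(1, g.size // 10)), clip_c)
        # Device transmits (alpha / h) * x; the channel multiplies by h,
        # so each contribution arrives as alpha * x.
        received += alpha * x
    received += rng.normal(0.0, noise_std, size=received.shape)  # channel noise
    return received / (alpha * n)  # server rescales to an averaged update

# Example usage with synthetic updates and channel gains.
updates = [rng.normal(size=1000) for _ in range(10)]
gains = rng.uniform(0.5, 1.5, size=10)
avg = over_the_air_round(updates, gains, clip_c=1.0, power_budget=1.0, noise_std=0.1)
```

In this sketch the effective noise on the averaged update scales as noise_std / (alpha * n), so weaker channels or tighter power budgets yield stronger privacy at the cost of accuracy, mirroring the trade-off that adaptive transmit-power design manages.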