Federated learning (FL) is a collaborative learning paradigm for training on decentralized private data from mobile terminals (MTs). However, it suffers from bottlenecks in communication, the limited resources of MTs, and privacy. Existing privacy-preserving FL methods usually adopt instance-level differential privacy (DP), which provides a rigorous privacy guarantee but at several costs: severe performance degradation, high transmission overhead, and the resource constraints of edge devices such as MTs. To overcome these drawbacks, we propose Fed-LTP, an efficient and privacy-enhanced FL framework with the \underline{\textbf{L}}ottery \underline{\textbf{T}}icket Hypothesis (LTH) and zero-concentrated D\underline{\textbf{P}} (zCDP). It generates a pruned global model on the server side and conducts sparse-to-sparse training from scratch with zCDP on the client side. On the server side, two pruning schemes are proposed: (i) weight-based pruning (LTH) determines the structure of the pruned global model; (ii) iterative pruning further reduces the number of parameters in the pruned model. Meanwhile, the performance of Fed-LTP is also boosted via model validation based on the Laplace mechanism. On the client side, we use sparse-to-sparse training to alleviate the resource constraints and provide a tighter privacy analysis to reduce the privacy budget. We evaluate the effectiveness of Fed-LTP on several real-world datasets in both independent and identically distributed (IID) and non-IID settings. The results clearly confirm the superiority of Fed-LTP over state-of-the-art (SOTA) methods in communication, computation, and memory efficiency while achieving a better utility-privacy trade-off.
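To make the server-side pruning concrete, below is a minimal sketch (not the authors' implementation) of LTH-style iterative magnitude pruning in PyTorch: train, prune the smallest-magnitude surviving weights, then rewind the remaining weights to their initial values, which is the standard lottery-ticket recipe. The helper names (`magnitude_mask`, `iterative_lth_prune`, `prune_rate`, `rounds`, `train_fn`) are illustrative assumptions.

```python
# Illustrative LTH-style iterative magnitude pruning (a sketch, not Fed-LTP itself).
import copy
import torch
import torch.nn as nn

def magnitude_mask(model: nn.Module, masks: dict, prune_rate: float) -> dict:
    """Prune the smallest `prune_rate` fraction of currently-alive weights."""
    new_masks = {}
    for name, param in model.named_parameters():
        if "weight" not in name:
            new_masks[name] = masks[name]  # leave biases etc. unpruned
            continue
        alive = param.data.abs()[masks[name].bool()]
        if alive.numel() == 0:
            new_masks[name] = masks[name]
            continue
        threshold = alive.quantile(prune_rate)
        new_masks[name] = (param.data.abs() > threshold).float() * masks[name]
    return new_masks

def iterative_lth_prune(model, init_state, train_fn, prune_rate=0.2, rounds=5):
    """Alternate training and pruning, rewinding weights to `init_state`."""
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train_fn(model, masks)  # assumed to apply the masks during training
        masks = magnitude_mask(model, masks, prune_rate)
        model.load_state_dict(copy.deepcopy(init_state))  # rewind to the "winning ticket"
        for name, param in model.named_parameters():
            param.data.mul_(masks[name])  # re-apply the sparsity pattern
    return model, masks
```

After `rounds` iterations with `prune_rate=0.2`, roughly `1 - 0.8**rounds` of each weight tensor is zeroed, so clients only need to train and transmit the surviving sparse structure.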
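The client-side privacy accounting can also be made concrete. The sketch below (with illustrative function names) uses standard zCDP facts: the Gaussian mechanism with L2 sensitivity $\Delta_2$ and noise scale $\sigma$ satisfies $\rho$-zCDP with $\rho = \Delta_2^2/(2\sigma^2)$; zCDP composes additively over steps; and $\rho$-zCDP implies $(\rho + 2\sqrt{\rho\ln(1/\delta)}, \delta)$-DP. In DP training, per-example gradient clipping to norm $C$ bounds the sensitivity. The specific numbers in the example are assumptions for illustration only.

```python
# Illustrative zCDP accounting for per-step Gaussian noise (a sketch, not Fed-LTP itself).
import math

def rho_per_step(delta_2: float, sigma: float) -> float:
    """zCDP cost of one Gaussian-mechanism release with L2 sensitivity delta_2."""
    return delta_2 ** 2 / (2.0 * sigma ** 2)

def zcdp_to_dp(rho: float, delta: float) -> float:
    """Standard conversion from rho-zCDP to (epsilon, delta)-DP."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

# Example (assumed values): clipping norm C = 1.0, noise sigma = 4.0, T = 100 steps.
rho_total = 100 * rho_per_step(delta_2=1.0, sigma=4.0)
epsilon = zcdp_to_dp(rho_total, delta=1e-5)
print(f"rho = {rho_total:.4f}, epsilon = {epsilon:.2f} at delta = 1e-5")
```

Because zCDP composes by simple addition of $\rho$ and converts tightly to $(\epsilon, \delta)$-DP, this style of accounting typically yields a smaller reported privacy budget than naive per-step $(\epsilon, \delta)$ composition over the same training run.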