Federated learning (FL) is a promising and powerful approach for training deep learning models without sharing clients' raw data. During the FL training process, the central server and distributed clients must periodically exchange a large amount of model information. To address the challenge of communication-intensive training, we propose a new training method, referred to as federated learning with dual-side low-rank compression (FedDLR), in which the deep learning model is compressed via low-rank approximations at both the server and client sides. The proposed FedDLR not only reduces the communication overhead during the training stage but also directly produces a compact model that speeds up inference. We provide a convergence analysis, investigate the influence of the key parameters, and empirically show that FedDLR outperforms state-of-the-art solutions in terms of both communication and computation efficiency.
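To make the compression idea concrete, below is a minimal sketch of dual-side low-rank compression of a single weight matrix, assuming a truncated-SVD factorization; the abstract does not specify the exact factorization or rank-selection rule, so the rank `r` and the function `low_rank_compress` are hypothetical choices for illustration only.

```python
import numpy as np

def low_rank_compress(W: np.ndarray, r: int):
    """Factor a weight matrix W (m x n) into U_r (m x r) and V_r (r x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :r] * s[:r]   # absorb singular values into the left factor
    V_r = Vt[:r, :]
    return U_r, V_r          # only m*r + r*n values need to be exchanged

# Example: compress a 512 x 256 layer to rank 16 before communication.
W = np.random.randn(512, 256)
U_r, V_r = low_rank_compress(W, r=16)
ratio = W.size / (U_r.size + V_r.size)
err = np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W)
print(f"compression ratio: {ratio:.1f}x, relative error: {err:.3f}")
```

The same factorized form can be transmitted in both directions (server to client and client to server), which is what makes the compression "dual-side": neither party ever needs to send the full-rank matrix.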