Federated learning (FL) has emerged as an efficient and privacy-preserving scheme for distributed learning. In this work, we focus on optimizing the computation and communication of FL from the perspective of pruning. By adopting layer-wise pruning in both local training and federated updating, we formulate an explicit FL pruning framework, FedLP (Federated Layer-wise Pruning), which is model-agnostic and applicable to different types of deep learning models. Two specific FedLP schemes are designed for scenarios with homogeneous and heterogeneous local models, respectively. Both theoretical and experimental evaluations verify that FedLP relieves the communication and computation bottlenecks of the system with only marginal performance decay. To the best of our knowledge, FedLP is the first framework that formally introduces layer-wise pruning into FL. Within the scope of federated learning, more variants and combinations can be further designed based on FedLP.
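To make the layer-wise idea concrete, the sketch below illustrates one plausible reading of the homogeneous scheme: each client uploads only a randomly retained subset of layers, and the server averages each layer over the clients that kept it. All names (keep_prob, the toy layer shapes, the noise used to mimic local training) are illustrative assumptions, not the paper's actual procedure or hyperparameters.

```python
import numpy as np

# Minimal sketch of layer-wise pruned federated aggregation (assumed behavior).
rng = np.random.default_rng(0)
layer_shapes = {"conv1": (8, 3, 3, 3), "conv2": (16, 8, 3, 3), "fc": (10, 144)}
global_model = {name: rng.standard_normal(shape) for name, shape in layer_shapes.items()}

def local_update_and_prune(global_model, keep_prob=0.7):
    """Simulate one client's local update, then drop each layer with prob 1 - keep_prob."""
    local = {k: v + 0.01 * rng.standard_normal(v.shape) for k, v in global_model.items()}
    return {k: v for k, v in local.items() if rng.random() < keep_prob}

def aggregate(global_model, client_updates):
    """Average each layer over the clients that uploaded it; keep the old layer otherwise."""
    new_model = {}
    for name, old in global_model.items():
        received = [u[name] for u in client_updates if name in u]
        new_model[name] = np.mean(received, axis=0) if received else old
    return new_model

client_updates = [local_update_and_prune(global_model) for _ in range(5)]
global_model = aggregate(global_model, client_updates)
print({k: v.shape for k, v in global_model.items()})
```

Under this reading, each client transmits roughly keep_prob of the full model per round, which is the source of the communication and computation savings the abstract refers to.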