An underlying assumption of recent federated learning (FL) paradigms is that all local models share the same network architecture and size, which is impractical when devices have different hardware resources. A scalable FL framework should accommodate this heterogeneity: clients differ in both computing capacity and communication capability. To this end, this paper proposes FedHM, a novel heterogeneous federated model compression framework that distributes heterogeneous low-rank models to clients and then aggregates them into a full-rank model. Our solution enables the training of heterogeneous models with varying computational complexity and aggregates them into a single global model. Furthermore, FedHM significantly reduces communication cost by transmitting low-rank models. Extensive experimental results demonstrate that FedHM outperforms state-of-the-art heterogeneous FL methods in the performance and robustness of models of different sizes under various FL settings. In addition, we provide the first theoretical analysis of the convergence guarantee of FL for heterogeneous devices.
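To make the core idea concrete, the following is a minimal sketch of the low-rank compression and full-rank aggregation described above. It is not the paper's exact FedHM algorithm; the helper names (`factorize_lowrank`, `reconstruct_fullrank`), the choice of truncated SVD, the per-client ranks, and plain averaging on the server are all illustrative assumptions.

```python
import torch

def factorize_lowrank(weight: torch.Tensor, rank: int):
    """Hypothetical helper: compress a 2-D weight matrix into two low-rank
    factors via truncated SVD, so W (m x n) ~= U_r (m x r) @ V_r (r x n)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    V_r = Vh[:rank, :]
    return U_r, V_r

def reconstruct_fullrank(U_r: torch.Tensor, V_r: torch.Tensor) -> torch.Tensor:
    """Map a client's low-rank factors back to a full-rank weight matrix
    before server-side aggregation."""
    return U_r @ V_r

if __name__ == "__main__":
    torch.manual_seed(0)
    global_weight = torch.randn(256, 128)

    # Clients with different hardware capacities receive different ranks,
    # so their local models have different computational complexities.
    client_ranks = [16, 32, 64]
    client_updates = []
    for r in client_ranks:
        U_r, V_r = factorize_lowrank(global_weight, r)
        # (local training on the factorized model would happen here)
        client_updates.append(reconstruct_fullrank(U_r, V_r))

    # The server recovers full-rank matrices and aggregates them,
    # e.g. by simple averaging (assumed here for illustration).
    aggregated = torch.stack(client_updates).mean(dim=0)
    print(aggregated.shape)  # torch.Size([256, 128])
```

Because each client only exchanges the factors `U_r` and `V_r` (roughly `r * (m + n)` parameters instead of `m * n`), smaller ranks cut both local compute and communication cost, which is the source of the savings claimed above.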