Federated learning (FL) is a method for training models with distributed data from numerous participants such as IoT devices. It inherently assumes uniform capacity among participants. In practice, however, participants have diverse computational resources owing to differing conditions such as energy budgets or the concurrent execution of unrelated tasks. It is necessary to reduce the computation overhead for participants with limited computational resources; otherwise they would be unable to finish the full training process. To address this computation heterogeneity, in this paper we propose a strategy for estimating local models without computationally intensive iterations. Building on it, we propose Computationally Customized Federated Learning (CCFL), which allows each participant to decide, in each round, whether to perform conventional local training or model estimation according to its current computational resources. Both theoretical analysis and extensive experiments indicate that CCFL achieves the same convergence rate as FedAvg without resource constraints. Furthermore, CCFL can be viewed as a computation-efficient extension of FedAvg that retains model performance while considerably reducing computation overhead.
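To illustrate the per-round choice described above, the following is a minimal sketch of a CCFL-style round on a least-squares objective. The `Client` class, the `has_sufficient_resources` check, and the estimator that reuses a client's previous update direction are all illustrative assumptions; the paper's actual estimation strategy and resource model are not specified in the abstract.

```python
import numpy as np

class Client:
    """Hypothetical client holding local data and its last model update."""
    def __init__(self, X, y, rng):
        self.X, self.y = X, y
        self.prev_update = np.zeros(X.shape[1])
        self.rng = rng

    def has_sufficient_resources(self):
        # Placeholder: in practice this would reflect energy budget or load.
        return self.rng.random() > 0.5

def local_training(w, X, y, lr=0.1, epochs=5):
    """Conventional local training: a few gradient steps on a least-squares loss."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def estimate_local_model(w, prev_update):
    """Estimate the local model without iterative training.
    Reusing the previous update direction is only an illustrative stand-in
    for the estimation strategy proposed in the paper."""
    return w + prev_update

def ccfl_round(w, clients):
    """One CCFL-style round: each client trains or estimates, server averages."""
    local_models = []
    for c in clients:
        if c.has_sufficient_resources():
            w_local = local_training(w, c.X, c.y)
        else:
            w_local = estimate_local_model(w, c.prev_update)
        c.prev_update = w_local - w
        local_models.append(w_local)
    return np.mean(local_models, axis=0)  # FedAvg-style aggregation

# Tiny usage example on synthetic data
rng = np.random.default_rng(0)
clients = [Client(rng.normal(size=(20, 3)), rng.normal(size=20), rng) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = ccfl_round(w, clients)
```

In this sketch a resource-constrained client skips the gradient loop entirely and contributes an estimated model instead, which is the source of the computation savings the abstract refers to.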