Many machine learning (ML) tasks focus on centralized learning (CL), which requires the transmission of the clients' local datasets to a parameter server (PS) and thus incurs a large communication overhead. Federated learning (FL) overcomes this issue by letting the clients send only model updates to the PS instead of their whole datasets. FL thereby moves the learning to the edge, where powerful computational resources are required on the client side. This requirement is not always met because edge devices have diverse computational capabilities. We address this with a novel hybrid federated and centralized learning (HFCL) framework that effectively trains a learning model by exploiting the computational capabilities of the clients. In HFCL, only the clients with sufficient resources employ FL; the remaining clients resort to CL by transmitting their local datasets to the PS. This allows all clients to collaborate on the learning process regardless of their computational resources. We also propose a sequential data transmission approach for HFCL (HFCL-SDT) to reduce the training duration. The proposed HFCL frameworks outperform non-hybrid FL (CL) schemes in terms of learning accuracy (communication overhead), since all clients contribute their datasets to the learning process regardless of their computational resources.
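To make the split concrete, the following is a minimal, hypothetical sketch (Python/NumPy, not the paper's implementation) of HFCL training rounds for a toy linear model: resource-rich clients send local gradient updates as in FL, while resource-limited clients upload their datasets once so the PS can compute the corresponding gradient centrally; the client indices, model, and aggregation rule are illustrative assumptions.

```python
# Hypothetical HFCL sketch: FL-capable clients send model updates,
# CL clients upload their data to the PS, which trains on the pooled set.
import numpy as np

rng = np.random.default_rng(0)

def local_grad(w, X, y):
    # gradient of mean-squared error for a linear model y ~= X @ w
    return 2.0 * X.T @ (X @ w - y) / len(y)

# toy datasets for 4 clients; assume only clients 0 and 1 can train locally
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
fl_ids, cl_ids = [0, 1], [2, 3]

# CL clients transmit their datasets to the PS once, before training starts
X_cl = np.vstack([clients[i][0] for i in cl_ids])
y_cl = np.concatenate([clients[i][1] for i in cl_ids])

w, lr = np.zeros(3), 0.05
for rnd in range(100):
    # FL clients send gradients (model updates) instead of raw data
    grads = [local_grad(w, *clients[i]) for i in fl_ids]
    # the PS computes the gradient on the pooled CL data itself
    grads.append(local_grad(w, X_cl, y_cl))
    # the PS aggregates (here: weighted by dataset size) and updates the model
    sizes = [len(clients[i][1]) for i in fl_ids] + [len(y_cl)]
    w -= lr * np.average(grads, axis=0, weights=sizes)
```

In this sketch every client's data influences the global model, as in CL, while only the CL clients pay the one-time cost of uploading raw data; the FL clients incur only the per-round model-update traffic.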