Federated learning (FL) is a training technique that enables client devices to jointly learn a shared model by aggregating locally computed updates without exposing their raw data. While most existing work focuses on improving FL model accuracy, in this paper we focus on improving training efficiency, which is often a hurdle for adopting FL in real-world applications. Specifically, we design an efficient FL framework that jointly optimizes model accuracy, processing latency, and communication efficiency, all of which are primary design considerations for practical deployments of FL. Inspired by the recent success of Multi-Agent Reinforcement Learning (MARL) in solving complex control problems, we present \textit{FedMarl}, an MARL-based FL framework that performs efficient run-time client selection. Experiments show that FedMarl can significantly improve model accuracy with much lower processing latency and communication cost.