Federated learning (FL) literature typically assumes that each client has a fixed amount of data, which is unrealistic in many practical applications. Some recent works introduced a framework for online FL (Online-Fed), wherein clients perform model learning on streaming data and communicate the model to the server; however, they do not address the associated communication overhead. As a solution, this paper presents a partial-sharing-based online federated learning framework (PSO-Fed) that enables clients to update their local models using continuous streaming data and share only portions of those updated models with the server. During a global iteration of PSO-Fed, non-participating clients can still update their local models with newly arrived data. Here, we consider a global task of kernel regression, where clients use a random Fourier features-based kernel least mean square (LMS) algorithm on their data for local learning. We examine the mean convergence of PSO-Fed for kernel regression. Experimental results show that PSO-Fed can achieve competitive performance with significantly lower communication overhead than Online-Fed.
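To make the mechanism concrete, below is a minimal sketch of one client-side step under the assumptions stated in the comments: a random Fourier feature (RFF) map for a Gaussian kernel, an LMS-style update on a single streaming sample, and a random mask selecting which model entries are shared with the server. The function names (`make_rff_map`, `client_step`, `partial_share`) and the toy data are illustrative only, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rff_map(input_dim, D, gamma=1.0):
    """Random Fourier features approximating the Gaussian kernel exp(-gamma * ||x - x'||^2)."""
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, input_dim))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)

def client_step(w, x, y, rff, mu=0.1):
    """Kernel LMS update in RFF space on one streaming sample (x, y)."""
    z = rff(x)              # feature vector
    e = y - w @ z           # instantaneous error
    return w + mu * e * z   # LMS gradient step

def partial_share(w, frac=0.25):
    """Share only a random fraction of the local model entries with the server."""
    mask = rng.random(w.size) < frac
    return mask, w * mask   # the server would aggregate only the shared coordinates

# Toy usage: one client learning y = sin(3x) from a stream of noisy samples.
D = 64
rff = make_rff_map(input_dim=1, D=D)
w = np.zeros(D)
for t in range(200):
    x = rng.normal(size=1)
    y = np.sin(3 * x[0]) + 0.05 * rng.normal()
    w = client_step(w, x, y, rff)
mask, shared = partial_share(w)
print("shared", int(mask.sum()), "of", D, "model entries")
```

In this sketch, communication savings come from transmitting only the masked coordinates each round; non-participating clients would simply keep running `client_step` on incoming data until they are next selected.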