Federated learning (FL) is a distributed machine learning paradigm in which a global server and collaborating clients train a global model in a privacy-preserving way, without directly sharing data. However, data heterogeneity, one of FL's main challenges, makes it difficult for the global model to perform well on each client's local data. Personalized federated learning (PFL) therefore aims to improve the model's performance on each client's local data as much as possible. Bayesian learning, in which the model's parameters are treated as random variables under a prior assumption, is a feasible approach to the data-heterogeneity problem: the more local data the model uses, the more it focuses on that data, and the less it has, the more it falls back on the prior. When Bayesian learning is applied to PFL, the global model supplies global knowledge as the prior for the local training process. In this paper, we model PFL with Bayesian learning by assuming a prior in the scaled exponential family, and accordingly propose pFedBreD, a framework that solves the resulting problem with Bregman-divergence regularization. Empirically, under a spherical Gaussian prior and a first-order mean-selection strategy, our proposal significantly outperforms other PFL algorithms on multiple public benchmarks.
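To make the objective concrete, a schematic form of the Bregman-regularized local problem (our illustrative notation, not necessarily the paper's exact formulation): each client $i$ solves

$$\min_{\theta_i} \; f_i(\theta_i) + \lambda \, D_F(\theta_i, \mu_i), \qquad D_F(\theta, \mu) = F(\theta) - F(\mu) - \langle \nabla F(\mu),\, \theta - \mu \rangle,$$

where $f_i$ is client $i$'s local empirical risk, $F$ is the convex generator induced by the exponential-family prior, $\lambda > 0$ trades off local fit against the prior, and $\mu_i$ is the prior mean selected from the global model (the mean-selection step). Under a spherical Gaussian prior, $F(\theta) = \tfrac{1}{2}\lVert\theta\rVert^2$, so the regularizer reduces to the squared Euclidean distance $\tfrac{1}{2}\lVert\theta_i - \mu_i\rVert^2$.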
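As a minimal sketch only (not the authors' implementation): under the spherical Gaussian prior, the Bregman regularizer becomes a squared L2 penalty, so one client's personalized update is a proximal-style local training loop. Taking the global model itself as the prior mean below is a deliberate simplification of the paper's first-order mean-selection strategy, and all names here are hypothetical.

```python
import numpy as np

def local_update(w_global, X, y, lam=1.0, lr=0.1, steps=200):
    """Personalized update for one client: minimize local logistic loss
    plus a Bregman regularizer. With a spherical Gaussian prior the
    Bregman divergence is (lam/2) * ||w - mu||^2, where mu is the prior
    mean (simplified here to the global model itself)."""
    mu = w_global.copy()   # prior mean (simplified mean-selection rule)
    w = w_global.copy()    # personalized parameters, warm-started globally
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # logistic predictions
        grad_loss = X.T @ (p - y) / len(y)      # gradient of cross-entropy
        grad_reg = lam * (w - mu)               # gradient of (lam/2)||w - mu||^2
        w -= lr * (grad_loss + grad_reg)
    return w

# Toy usage on synthetic client data
rng = np.random.default_rng(0)
d = 5
w_global = rng.normal(size=d)                    # stand-in for the server model
X = rng.normal(size=(100, d))                    # one client's local features
y = (X @ rng.normal(size=d) > 0).astype(float)   # toy binary labels
w_personal = local_update(w_global, X, y)
```

Larger `lam` pulls the personalized model toward the global prior mean; smaller `lam` lets it track the client's local data, mirroring the data-volume trade-off described above.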