Personalized federated learning (FL) is an emerging research direction that learns an easily adaptable global model in the presence of data heterogeneity among clients. However, a main challenge in personalized FL is its heavy reliance on clients' computing resources to calculate higher-order gradients, since client data is kept off the server to ensure privacy. To resolve this, we focus on a setting where the server possesses its own data independent of the clients' data -- a setting prevalent in various applications, yet relatively unexplored in the existing literature. Specifically, we propose FedSIM, a new method for personalized FL that actively utilizes such server data to improve meta-gradient calculation on the server and thereby increase personalization performance. Through various benchmarks and ablations, we demonstrate that FedSIM is superior to existing methods in terms of accuracy, is more computationally efficient because the full meta-gradients are calculated on the server, and converges up to 34.2% faster.
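To make the notion of a "meta-gradient" concrete: in MAML-style personalized FL, the server differentiates the post-adaptation (query) loss through one inner gradient step on support data. The sketch below is not the FedSIM algorithm itself (the abstract does not specify it); it is a minimal illustration, assuming a quadratic (least-squares) loss so the second-order term can be written in closed form. All function and variable names (`loss_grad`, `meta_gradient`, `X_s`, `X_q`, `alpha`) are hypothetical.

```python
import numpy as np

def loss_grad(w, X, y):
    # Gradient of the mean-squared-error loss 0.5 * ||Xw - y||^2 / n.
    n = X.shape[0]
    return X.T @ (X @ w - y) / n

def meta_gradient(w, X_s, y_s, X_q, y_q, alpha=0.1):
    # One inner gradient step on the support set (X_s, y_s), then the
    # query loss on (X_q, y_q) is differentiated *through* that step.
    # For a quadratic loss the support Hessian H_s is constant, so the
    # chain rule gives the exact second-order meta-gradient.
    n_s = X_s.shape[0]
    H_s = X_s.T @ X_s / n_s                      # Hessian of support loss
    w_adapt = w - alpha * loss_grad(w, X_s, y_s)  # inner adaptation step
    g_q = loss_grad(w_adapt, X_q, y_q)           # query gradient at adapted point
    I = np.eye(w.size)
    return (I - alpha * H_s) @ g_q               # d/dw of query loss through the step
```

Computing this quantity requires the Hessian-vector term `(I - alpha * H_s) @ g_q`; when the adaptation data lives on the server, this second-order work can be done there rather than on resource-constrained clients, which is the computational advantage the abstract alludes to.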