In recent years, personalized federated learning (pFL) has attracted increasing attention for its potential in dealing with statistical heterogeneity among clients. However, state-of-the-art pFL methods rely on model-parameter aggregation at the server side, which requires all models to have the same structure and size, and thus limits applicability to more heterogeneous scenarios. To overcome such model constraints, we exploit the potential of heterogeneous model settings and propose a novel training framework that employs personalized models for different clients. Specifically, we reformulate the aggregation procedure of conventional pFL as a personalized group knowledge transfer training algorithm, namely KT-pFL, which enables each client to maintain a personalized soft prediction at the server side to guide the others' local training. KT-pFL updates the personalized soft prediction of each client as a linear combination of all local soft predictions using a knowledge coefficient matrix, which can adaptively reinforce collaboration among clients with similar data distributions. Furthermore, to quantify the contribution of each client to others' personalized training, the knowledge coefficient matrix is parameterized so that it can be trained simultaneously with the models. The knowledge coefficient matrix and the model parameters are alternately updated in each round via gradient descent. Extensive experiments on various datasets (EMNIST, Fashion-MNIST, CIFAR-10) are conducted under different settings (heterogeneous models and data distributions). The results demonstrate that the proposed framework is the first federated learning paradigm to realize personalized model training via parameterized group knowledge transfer, while achieving significant performance gains compared with state-of-the-art algorithms.
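The aggregation step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function and variable names (`personalized_soft_predictions`, `coeff_logits`, `local_soft_preds`) are assumptions, and a row-wise softmax is assumed as one way to keep each client's mixing weights normalized.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def personalized_soft_predictions(coeff_logits, local_soft_preds):
    """Combine clients' local soft predictions into per-client targets.

    coeff_logits: (N, N) trainable logits of the knowledge coefficient
        matrix (one row of mixing weights per client).
    local_soft_preds: (N, B, C) soft predictions of N clients on a shared
        batch of B samples over C classes.
    Returns: (N, B, C) personalized soft-prediction target per client,
        i.e. target_i = sum_j W[i, j] * local_soft_preds[j].
    """
    W = softmax(coeff_logits, axis=1)  # rows sum to 1
    return np.einsum('ij,jbc->ibc', W, local_soft_preds)
```

In training, these personalized targets would guide each client's local distillation loss, while the coefficient logits themselves receive gradients in the alternating update described in the abstract.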