While federated learning traditionally aims to train a single global model across decentralized local datasets, one model may not always be ideal for all participating clients. Here we propose an alternative, where each client only federates with other relevant clients to obtain a stronger model tailored to its client-specific objectives. To achieve this personalization, rather than computing a single model average with constant weights for the entire federation as in traditional FL, we efficiently calculate optimal weighted model combinations for each client, by estimating how much a client can benefit from another's model. We do not assume knowledge of any underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest, enabling greater flexibility for personalization. We evaluate and characterize our method on a variety of federated settings, datasets, and degrees of local data heterogeneity. Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
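To make the idea concrete, below is a minimal sketch of one way such per-client combination weights could be computed. It is an assumption-laden illustration, not the paper's exact algorithm: it supposes each client scores a received model by the reduction in its own validation loss relative to its current model (normalized by parameter distance), zeroes out unhelpful models, and then moves toward the weighted combination. The function names `personalized_update` and `val_loss_fn` are hypothetical.

```python
import numpy as np

def personalized_update(client_params, received_params, val_loss_fn, eps=1e-8):
    """Sketch of a per-client weighted model combination.

    client_params:   np.ndarray, this client's current flattened parameters.
    received_params: list of np.ndarray, models downloaded from other clients.
    val_loss_fn:     callable mapping a parameter vector to validation loss
                     on this client's target distribution of interest.
    """
    base_loss = val_loss_fn(client_params)
    weights = []
    for theta_n in received_params:
        # Estimated benefit: validation-loss reduction per unit of
        # parameter movement; models that do not help get zero weight.
        gain = base_loss - val_loss_fn(theta_n)
        dist = np.linalg.norm(theta_n - client_params) + eps
        weights.append(max(gain / dist, 0.0))

    total = sum(weights)
    if total == 0.0:
        # No received model helps this client's objective: keep local model.
        return client_params
    weights = [w / total for w in weights]

    # Move toward the weighted combination of the helpful models.
    return client_params + sum(
        w * (theta_n - client_params)
        for w, theta_n in zip(weights, received_params)
    )
```

Because the weights are computed per client against that client's own validation objective, each client effectively federates only with the subset of clients whose models benefit it, rather than with the entire federation at constant weights.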