Knowledge sharing and model personalization are essential for tackling the non-IID challenge in federated learning (FL). Most existing FL methods focus on one of two extremes: 1) learning a single shared model to serve all clients with non-IID data, and 2) learning a personalized model for each client, namely personalized FL. A trade-off solution, known as clustered FL or cluster-wise personalized FL, groups similar clients into clusters and learns a shared model for all clients within each cluster. This paper revisits clustered FL by formulating it as a bi-level optimization framework that unifies existing methods. We propose a new theoretical analysis framework that proves convergence by accounting for the clusterability among clients. In addition, we embody this framework in an algorithm named Weighted Clustered Federated Learning (WeCFL). Empirical analysis verifies the theoretical results and demonstrates the effectiveness of WeCFL under the proposed cluster-wise non-IID settings.
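For illustration only, one common way to sketch cluster-wise personalized FL as a bi-level problem is shown below; the notation ($K$ clusters, cluster models $\theta_k$, cluster sets $\mathcal{C}_k$, client risks $F_i$, and sample counts $n_i$) is assumed here for exposition and need not match the paper's exact WeCFL formulation:

\[
\min_{\theta_1,\dots,\theta_K}\;\sum_{k=1}^{K}\sum_{i\in\mathcal{C}_k}\frac{n_i}{n}\,F_i(\theta_k)
\quad\text{s.t.}\quad
\mathcal{C}_k=\Big\{\,i \;:\; k=\operatorname*{arg\,min}_{k'\in[K]} F_i(\theta_{k'})\,\Big\},
\]

where $F_i(\theta)=\tfrac{1}{n_i}\sum_{j=1}^{n_i}\ell(\theta;\xi_{i,j})$ denotes the local empirical risk of client $i$ and $n=\sum_i n_i$. The inner level assigns each client to the cluster whose model best fits its local data, while the outer level trains one shared model per cluster, which is the trade-off between a single global model and fully personalized models described above.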