Data heterogeneity across clients in federated learning (FL) settings is a widely acknowledged challenge. In response, personalized federated learning (PFL) has emerged as a framework to curate local models for clients' tasks. In PFL, a common strategy is to develop local and global models jointly: the global model (for generalization) informs the local models, and the local models (for personalization) are aggregated to update the global model. A key observation is that if we can improve the generalization ability of local models, then we can improve the generalization of the global model, which in turn builds better personalized models. In this work, we consider class imbalance, an overlooked type of data heterogeneity, in the classification setting. We propose FedNH, a novel method that improves the local models' performance for both personalization and generalization by combining the uniformity and semantics of class prototypes. FedNH initially distributes class prototypes uniformly in the latent space and smoothly infuses the class semantics into the class prototypes. We show that imposing uniformity helps to combat prototype collapse, while infusing class semantics improves local models. Extensive experiments were conducted on popular classification datasets under the cross-device setting. Our results demonstrate the effectiveness and stability of our method over recent works.
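To make the two ingredients concrete, below is a minimal PyTorch sketch of how they could be realized; it is an illustration under stated assumptions, not the paper's reference implementation. The function names `init_uniform_prototypes` and `smooth_prototype_update` and the mixing weight `rho` are hypothetical; here, uniformity is approximated by penalizing pairwise cosine similarity between unit-norm prototypes, and semantic infusion by a moving-average update followed by re-projection onto the unit sphere.

```python
import torch
import torch.nn.functional as F


def init_uniform_prototypes(num_classes: int, dim: int,
                            steps: int = 2000, lr: float = 0.1) -> torch.Tensor:
    """Spread `num_classes` unit vectors in R^dim as evenly as possible.

    Hypothetical sketch: minimize the squared off-diagonal cosine
    similarities, one plausible relaxation of 'uniform on the sphere'.
    """
    protos = torch.randn(num_classes, dim, requires_grad=True)
    opt = torch.optim.SGD([protos], lr=lr)
    identity = torch.eye(num_classes)
    for _ in range(steps):
        p = F.normalize(protos, dim=1)          # project onto the unit sphere
        sim = p @ p.t()                         # pairwise cosine similarities
        loss = (sim - identity).pow(2).sum()    # push distinct prototypes apart
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.normalize(protos.detach(), dim=1)


def smooth_prototype_update(prototypes: torch.Tensor,
                            class_means: torch.Tensor,
                            rho: float = 0.9) -> torch.Tensor:
    """Infuse class semantics smoothly (assumed moving-average form).

    Mix the current prototypes with aggregated per-class feature means,
    then re-project onto the unit sphere so uniformity is not destroyed.
    """
    mixed = rho * prototypes + (1.0 - rho) * F.normalize(class_means, dim=1)
    return F.normalize(mixed, dim=1)


# Example usage (shapes are illustrative):
protos = init_uniform_prototypes(num_classes=10, dim=128)
means = torch.randn(10, 128)  # stand-in for aggregated per-class feature means
protos = smooth_prototype_update(protos, means, rho=0.9)
```

Keeping `rho` close to 1 changes the prototypes only gradually, which is one way to read "smoothly infuses" in the abstract: the uniform geometry dominates early on and class semantics accumulate over rounds.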