Personalized Federated Learning (PFL) aims to learn a personalized model for each client based on the knowledge across all clients in a privacy-preserving manner. Existing PFL methods generally assume that the underlying global data across all clients are uniformly distributed over classes, without considering long-tailed class distributions. The joint problem of data heterogeneity and long-tail distribution in the FL environment is more challenging and severely degrades the performance of personalized models. In this paper, we propose a PFL method called Federated Learning with Adversarial Feature Augmentation (FedAFA) to address this joint problem. FedAFA optimizes the personalized model for each client by producing a balanced feature set to enhance the local minority classes. The local minority-class features are generated by transferring knowledge from the local majority-class features extracted by the global model, in the manner of adversarial example learning. Experimental results on benchmarks under different settings of data heterogeneity and long-tail distribution demonstrate that FedAFA significantly improves the personalized performance of each client compared with state-of-the-art PFL algorithms. The code is available at https://github.com/pxqian/FedAFA.
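To make the feature-augmentation idea concrete, below is a minimal sketch (not the paper's released code) of how majority-class features could be perturbed toward a minority class in an adversarial-example style to rebalance local training. It assumes a PyTorch setup; the function and parameter names (`global_extractor`, `local_classifier`, `steps`, `step_size`) are illustrative, not the authors' API.

```python
# Minimal sketch of adversarial feature augmentation, assuming a PyTorch setup.
# All names are illustrative; see https://github.com/pxqian/FedAFA for the actual implementation.
import torch
import torch.nn.functional as F

def augment_minority_features(global_extractor, local_classifier,
                              majority_x, minority_label,
                              steps=10, step_size=0.1):
    """Turn majority-class samples into synthetic minority-class features by
    gradient descent on the classifier loss toward the minority label,
    i.e. a targeted adversarial perturbation applied in feature space."""
    with torch.no_grad():
        feats = global_extractor(majority_x)   # features from the frozen global model
    feats = feats.clone().requires_grad_(True)
    target = torch.full((feats.size(0),), minority_label, dtype=torch.long)
    for _ in range(steps):
        loss = F.cross_entropy(local_classifier(feats), target)
        grad, = torch.autograd.grad(loss, feats)
        # Move features toward the minority class (sign-based step, FGSM-style).
        feats = (feats - step_size * grad.sign()).detach().requires_grad_(True)
    return feats.detach(), target              # add to the local feature set to balance classes
```

In this reading, the synthetic features and their minority labels would be mixed with the client's real features so the personalized classifier is trained on a class-balanced feature set.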