Since real-world data often follows a long-tailed distribution, it is challenging for Federated Learning (FL) to train across decentralized clients in practical applications. We present Global-Regularized Personalization (GRP-FED) to tackle this data imbalance by maintaining a single global model and a local model for each client. With adaptive aggregation, the global model treats clients fairly and mitigates the global long-tailed issue. Each local model is learned from its client's local data and aligned with that distribution for customization. To prevent the local model from merely overfitting, GRP-FED applies an adversarial discriminator to regularize between the learned global and local features. Extensive results show that GRP-FED improves under both global and local scenarios on the real-world MIT-BIH and synthetic CIFAR-10 datasets, achieving comparable performance and addressing client imbalance.
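The adaptive aggregation described above can be sketched as a weighted average over client parameters. This is a minimal illustrative sketch, not the paper's exact rule: here we hypothetically weight clients by a softmax over their local losses, so that clients the global model currently serves poorly contribute more to the aggregate.

```python
import numpy as np

def adaptive_aggregate(client_params, client_losses, temperature=1.0):
    """Aggregate client parameter vectors into a global model.

    Hypothetical weighting (an assumption for illustration): clients with
    higher local loss receive a larger aggregation weight via a softmax,
    pulling the global model toward under-served clients. GRP-FED's actual
    adaptive aggregation may differ.
    """
    losses = np.asarray(client_losses, dtype=float)
    weights = np.exp(losses / temperature)
    weights /= weights.sum()                  # normalize to a distribution
    params = np.stack(client_params)          # shape: (num_clients, dim)
    return weights @ params                   # weighted average of clients

# Three clients with toy 2-D parameter vectors and equal local losses.
clients = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
global_params = adaptive_aggregate(clients, client_losses=[0.5, 0.5, 0.5])
# With equal losses the rule reduces to a plain average of the clients.
```

With unequal losses, the softmax shifts weight toward higher-loss clients, which is one simple way to operationalize "treating multiple clients fairly" under a long-tailed global distribution.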