Federated learning allows multiple clients to collaboratively train a model without exchanging their data, thus preserving data privacy. Unfortunately, it suffers significant performance degradation under heterogeneous data at clients. Common solutions involve designing an auxiliary loss in local training to regularize weight divergence or feature inconsistency. However, we discover that these approaches fall short of the expected performance because they ignore a vicious cycle between classifier divergence and feature-mapping inconsistency across clients, such that client models are updated in inconsistent feature spaces with diverged classifiers. We then propose a simple yet effective framework named Federated learning with Feature Anchors (FedFA) to align the feature mappings and calibrate the classifiers across clients during local training, which allows client models to be updated in a shared feature space with consistent classifiers. We demonstrate that this modification yields similar classifiers and a virtuous cycle between feature consistency and classifier similarity across clients. Extensive experiments show that FedFA significantly outperforms state-of-the-art federated learning algorithms on various image classification datasets under label and feature distribution skews.
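To make the idea concrete, below is a minimal PyTorch sketch of anchor-based local training as we read the abstract: each class is assigned a feature anchor shared by all clients, and an auxiliary loss pulls local features toward their class anchors so that every client optimizes in the same feature space. The names (ClientModel, local_step, the weight lam) and the MSE form of the anchor loss are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical shared feature anchors: one vector per class, distributed by
# the server and kept identical across all clients during a round.
num_classes, feat_dim = 10, 64
anchors = F.normalize(torch.randn(num_classes, feat_dim), dim=1)

class ClientModel(nn.Module):
    """Toy client model: a feature encoder followed by a linear classifier."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, feat_dim), nn.ReLU()
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)              # feature mapping
        return z, self.classifier(z)     # features and logits

def local_step(model, x, y, lam=1.0):
    """One local training step: task loss plus an anchor-alignment loss."""
    z, logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Pull each sample's feature toward its class anchor; because the
    # anchors are shared, all clients update in the same feature space,
    # which in turn keeps their classifiers from diverging.
    anchor_loss = F.mse_loss(z, anchors[y])
    return ce + lam * anchor_loss

# Usage on dummy data (assumed 28x28 grayscale inputs):
model = ClientModel()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, num_classes, (32,))
opt.zero_grad()
loss = local_step(model, x, y)
loss.backward()
opt.step()
```

The design choice the abstract motivates is visible here: the regularizer acts on the feature space directly, rather than on weight divergence, so feature consistency and classifier similarity reinforce each other across clients instead of degrading together.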