Federated learning allows multiple clients to collaboratively train a model without exchanging their data, thus preserving data privacy. Unfortunately, it suffers significant performance degradation when client data are heterogeneous. Common solutions in local training involve designing an auxiliary loss to regularize weight divergence or feature inconsistency. However, we discover that these approaches fall short of the expected performance because they ignore a vicious cycle between classifier divergence and feature-mapping inconsistency across clients, in which client models are updated in inconsistent feature spaces with divergent classifiers. We then propose a simple yet effective framework named Federated learning with Feature Anchors (FedFA) to align feature mappings and calibrate classifiers across clients during local training, which allows client models to be updated in a shared feature space with consistent classifiers. We demonstrate that this modification yields similar classifiers across clients and a virtuous cycle between feature consistency and classifier similarity. Extensive experiments show that FedFA significantly outperforms state-of-the-art federated learning algorithms on various image classification datasets under label and feature distribution skews.
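To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a feature-anchor term could be added to a client's local objective. It assumes a model split into a feature extractor and a classifier, and a server-provided tensor `anchors` of shape `[num_classes, feature_dim]` holding one shared anchor per class; the function name `local_step` and the weight `lam` are illustrative choices.

```python
# Illustrative sketch only: one local training step with a hypothetical
# feature-anchor alignment penalty added to the usual cross-entropy loss.
import torch
import torch.nn.functional as F

def local_step(extractor, classifier, optimizer, x, y, anchors, lam=1.0):
    """One client update: cross-entropy plus an anchor-alignment term."""
    optimizer.zero_grad()
    feats = extractor(x)            # [batch, feature_dim]
    logits = classifier(feats)      # [batch, num_classes]
    ce = F.cross_entropy(logits, y)
    # Pull each sample's feature toward the shared anchor of its ground-truth
    # class, so all clients optimize in (approximately) the same feature space.
    anchor_loss = F.mse_loss(feats, anchors[y])
    # A classifier-calibration term (e.g. classifying the anchors themselves)
    # could be added analogously; it is omitted here for brevity.
    loss = ce + lam * anchor_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, the anchor term discourages per-client feature drift, which is the mechanism the abstract credits for breaking the vicious cycle between classifier divergence and feature inconsistency.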