In contrast to centralized model training that requires data collection, federated learning (FL) enables remote clients to collaboratively train a model without exposing their private data. However, model performance usually degrades in FL due to the heterogeneous data generated by clients with diverse characteristics. One promising strategy to maintain good performance is to prevent local training from drifting far away from the global model. Previous studies accomplish this by regularizing the distance between the representations learned by the local and global models. However, they only consider representations from the early layers of a model or the layer preceding the output layer. In this study, we introduce FedIntR, which provides more fine-grained regularization by integrating the representations of intermediate layers into the local training process. Specifically, FedIntR computes a regularization term that encourages closeness between the intermediate-layer representations of the local and global models. Additionally, FedIntR automatically determines the contribution of each layer's representation to the regularization term based on the similarity between the local and global representations. We conduct extensive experiments on various datasets to show that FedIntR achieves equivalent or higher performance compared to state-of-the-art approaches.
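To make the described regularization concrete, below is a minimal sketch of how an intermediate-representation regularizer of this kind could be computed on a client. The choice of cosine similarity for the layer weights, the softmax normalization, and the MSE distance are illustrative assumptions, not the paper's exact formulation; `fedintr_regularizer`, `local_feats`, `global_feats`, and `mu` are hypothetical names.

```python
import torch
import torch.nn.functional as F

def fedintr_regularizer(local_feats, global_feats):
    # local_feats / global_feats: lists of per-layer representations,
    # one tensor of shape [batch, dim] per intermediate layer, taken
    # from the local model and the (frozen) global model respectively.
    sims = []
    for h_local, h_global in zip(local_feats, global_feats):
        # Similarity between local and global representations of the
        # same layer, averaged over the batch (illustrative choice).
        sims.append(F.cosine_similarity(h_local, h_global.detach(), dim=1).mean())
    sims = torch.stack(sims)

    # Turn per-layer similarities into contribution weights
    # (softmax is an assumption; the paper's scheme may differ).
    weights = torch.softmax(sims, dim=0)

    # Weighted sum of per-layer distances between local and global features.
    reg = torch.zeros((), device=sims.device)
    for w, h_local, h_global in zip(weights, local_feats, global_feats):
        reg = reg + w * F.mse_loss(h_local, h_global.detach())
    return reg

# Usage inside a client's local training step (sketch):
#   loss = task_loss + mu * fedintr_regularizer(local_feats, global_feats)
```

In this sketch, the regularizer is added to the usual task loss during local training, so each client is penalized when its intermediate representations drift away from those of the global model, with layers weighted automatically by their representation similarity.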