Federated learning (FL) enables edge devices to cooperatively train a globally shared model while keeping the training data local and private. However, a common but impractical assumption in FL is that the participating edge devices possess the same resources and share an identical global model architecture. In this study, we propose a novel FL method called Federated Intermediate Layers Learning (FedIN), which supports heterogeneous models without relying on any public dataset. The training models in FedIN are divided into three parts: an extractor, the intermediate layers, and a classifier. The architectures of the extractor and the classifier are identical across all devices to keep the intermediate-layer features consistent, while the architecture of the intermediate layers can vary across heterogeneous devices according to their resource capacities. To exploit the knowledge contained in these features, we propose IN training, which trains the intermediate layers using the features shared by other clients. Additionally, we formulate and solve a convex optimization problem to mitigate the gradient divergence induced by conflicts between IN training and local training. The experimental results show that FedIN achieves the best performance in heterogeneous model environments compared with state-of-the-art algorithms. Furthermore, our ablation study demonstrates the effectiveness of IN training and of our solution to the convex optimization problem.
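To make the three-part split concrete, the following is a minimal PyTorch sketch of a client model whose extractor and classifier are fixed across devices while the intermediate layers scale with device capacity. The class and helper names (HeteroModel, make_intermediate) and all layer sizes are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class HeteroModel(nn.Module):
    """Extractor and classifier are identical on every client; only the
    intermediate layers vary with each device's resource budget."""
    def __init__(self, intermediate: nn.Module):
        super().__init__()
        # Shared extractor: same architecture on all clients, so the
        # feature space entering the intermediate layers is consistent.
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Heterogeneous part: depth chosen per device capacity.
        self.intermediate = intermediate
        # Shared classifier: identical on all clients as well.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
        )

    def forward(self, x):
        in_feat = self.extractor(x)            # input feature of the intermediate layers
        out_feat = self.intermediate(in_feat)  # output feature of the intermediate layers
        return self.classifier(out_feat)

def make_intermediate(num_blocks: int) -> nn.Module:
    """More capable devices stack more blocks; every variant maps
    16 -> 64 channels so the shared classifier fits unchanged."""
    blocks, ch = [], 16
    for _ in range(num_blocks):
        blocks += [nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU()]
        ch = 64
    return nn.Sequential(*blocks)

small_client = HeteroModel(make_intermediate(1))  # resource-constrained device
large_client = HeteroModel(make_intermediate(4))  # more capable device

Because the extractor and classifier are shared, the (input feature, output feature) pairs produced around the intermediate layers live in a common space across clients, which is what allows one client's features to supervise another client's intermediate layers in IN training.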
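The abstract states that a convex optimization problem reconciles the IN-training gradient with the local-training gradient when they conflict. The sketch below shows one standard way such a problem resolves in closed form, namely removing the conflicting component of the IN gradient (a projection in the style of gradient surgery); this is an illustrative assumption, not necessarily the paper's exact formulation.

import torch

def reconcile(g_local: torch.Tensor, g_in: torch.Tensor) -> torch.Tensor:
    """Return an update direction for the intermediate layers.

    If the IN gradient conflicts with the local gradient (negative inner
    product), drop its component along the local gradient so the combined
    step no longer degrades the local objective."""
    dot = torch.dot(g_in, g_local)
    if dot < 0:  # gradients conflict
        g_in = g_in - dot / g_local.norm().pow(2) * g_local
    return g_local + g_in

# Toy usage on flattened gradient vectors.
g_local = torch.tensor([1.0, 0.0])
g_in = torch.tensor([-0.5, 1.0])  # conflicts with g_local
step = reconcile(g_local, g_in)
print(step)                       # tensor([1., 1.])

In this toy example the raw sum g_local + g_in would partially cancel the local update; after the projection, the combined step preserves the local direction while still following the non-conflicting part of the IN gradient.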