Data scarcity and heterogeneity pose significant performance challenges for personalized federated learning; in existing methods these challenges mainly manifest as overfitting and low accuracy. To overcome them, this paper proposes a multi-layer, multi-fusion strategy framework in which the server treats the network-layer parameters of each client's uploaded model as the basic unit of fusion for information-sharing computation. A new fusion strategy combining personalized and generic fusion is then proposed, and the number of network layers assigned to each fusion strategy (the layer fusion threshold) is designed according to the function of each layer. Under this mechanism, an L2-norm negative exponential similarity metric is employed to compute the fusion weights of the corresponding feature-extraction-layer parameters for each client, improving the efficiency of personalized collaboration on heterogeneous data. Meanwhile, a fusion strategy that approximates the federated global optimal model is adopted for the fully connected layers; this generic strategy alleviates the overfitting introduced by aggressive personalization. Finally, experimental results show that the proposed method outperforms state-of-the-art methods.
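To make the layer-wise fusion concrete, the following is a minimal sketch of how the server-side weight computation could look. It assumes flattened per-layer parameter vectors per client, an exponential decay in the L2 distance for the personalized feature-extraction layers, and a simple sample-size-weighted average as a stand-in for the global-optimal-model approximation on the fully connected layers; the exact normalization and approximation used by the paper are not specified in the abstract, so the function names and details below are illustrative assumptions only.

```python
import numpy as np

def personalized_fusion_weights(layer_params, client_idx):
    """Sketch: L2-norm negative-exponential similarity weights for one
    feature-extraction layer, from client `client_idx`'s point of view.
    layer_params: list of flattened layer parameter vectors, one per client."""
    anchor = layer_params[client_idx]
    # Similarity decays exponentially with the L2 distance between clients.
    sims = np.array([np.exp(-np.linalg.norm(anchor - p)) for p in layer_params])
    return sims / sims.sum()  # normalize so the fusion weights sum to 1

def fuse_layer(layer_params, weights):
    """Weighted fusion of the corresponding layer across clients."""
    return sum(w * p for w, p in zip(weights, layer_params))

def generic_fusion(layer_params, client_sizes):
    """Generic fusion for the fully connected layers: a sample-size-weighted
    average, used here as a placeholder for the federated global optimal
    model approximation described in the abstract."""
    total = sum(client_sizes)
    return sum((n / total) * p for n, p in zip(client_sizes, layer_params))

# Toy usage with 3 clients and random layer parameters.
rng = np.random.default_rng(0)
clients = [rng.normal(size=128) for _ in range(3)]
w = personalized_fusion_weights(clients, client_idx=0)
fused_feature_layer = fuse_layer(clients, w)            # personalized fusion
fused_fc_layer = generic_fusion(clients, [100, 80, 120])  # generic fusion
```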