Federated learning (FL) enables multiple clients to jointly train a machine learning model without sharing their private data. However, the non-IID data of clients presents a tough challenge for FL. Existing personalized FL approaches rely heavily on treating one complete model as the basic unit by default, ignoring the differing significance of individual layers under clients' non-IID data. In this work, we propose a new framework, Federated Model Components Self-Attention (FedMCSA), to handle non-IID data in FL, which employs a model-components self-attention mechanism to promote cooperation between different clients at a finer granularity. This mechanism facilitates collaboration between similar model components while reducing interference between model components with large differences. We conduct extensive experiments to demonstrate that FedMCSA outperforms previous methods on four benchmark datasets. Furthermore, we empirically show the effectiveness of the model-components self-attention mechanism, which is complementary to existing personalized FL methods and can significantly improve the performance of FL.
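The abstract does not specify how the model-components self-attention is computed. A minimal sketch of one plausible instantiation, assuming cosine similarity between clients' parameters of the same component and a softmax temperature (both hypothetical choices, not the paper's exact formulation): the server forms, for each client, a personalized aggregate of one component in which similar clients receive large attention weights and dissimilar clients are down-weighted.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def component_attention_aggregate(components, temperature=1.0):
    """Attention-weighted aggregation of ONE model component across clients.

    components: list of flattened parameter vectors, one per client, all
    belonging to the same component (e.g. the same layer).
    Returns one personalized aggregate per client: clients whose components
    are similar cooperate strongly; dissimilar clients interfere little.
    """
    C = np.stack(components)                       # (num_clients, dim)
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    U = C / np.clip(norms, 1e-12, None)            # unit-normalized components
    sim = U @ U.T                                  # pairwise cosine similarity
    personalized = []
    for k in range(len(components)):
        w = softmax(sim[k] / temperature)          # attention weights for client k
        personalized.append(w @ C)                 # weighted aggregate component
    return personalized

# Toy check: clients 0 and 1 hold similar components, client 2 is an outlier.
comps = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
out = component_attention_aggregate(comps, temperature=0.5)
```

In this sketch the aggregation is applied independently per component, so a client can cooperate closely with different peers on different layers, which is the finer granularity the framework targets.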