Federated Learning (FL) is a collaborative machine learning technique for training a global model without accessing clients' private data. The main challenges in FL are statistical diversity among clients, the limited computing capability of clients' devices, and the excessive communication overhead between the server and clients. To address these challenges, we propose a novel sparse personalized federated learning scheme via maximizing correlation, called FedMac. By incorporating an approximated L1-norm and the correlation between client models and the global model into the standard FL loss function, FedMac improves performance on statistically diverse data and reduces the communication and computation loads required in the network compared with non-sparse FL. Convergence analysis shows that the sparsity constraints in FedMac do not affect the convergence rate of the global model, and theoretical results show that FedMac achieves good sparse personalization, outperforming personalization methods based on the L2-norm. Experimentally, we demonstrate the benefits of this sparse personalization architecture compared with state-of-the-art personalization methods (e.g., FedMac achieves 98.95%, 99.37%, 90.90%, and 89.06% accuracy on the MNIST, FMNIST, CIFAR-100, and Synthetic datasets, respectively, under non-i.i.d. variants).
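For concreteness, a minimal sketch of the per-client objective described above is given next; the exact formulation, the regularization weights $\lambda_1, \lambda_2$, and the smoothed surrogate $\|\cdot\|_{1,\epsilon}$ for the approximated L1-norm are illustrative assumptions here, not the paper's verbatim definition:
\[
\min_{\theta_i}\; f_i(\theta_i) \;+\; \lambda_1 \,\|\theta_i\|_{1,\epsilon} \;-\; \lambda_2 \,\langle \theta_i, w \rangle ,
\]
where $f_i$ is client $i$'s empirical loss, $\theta_i$ is its personalized model, and $w$ is the global model; the $\lambda_1$ term induces sparsity through the approximated L1-norm, the $\lambda_2$ inner-product term maximizes the correlation between the client model and the global model, and the server aggregates the resulting client updates into $w$.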