Federated Learning (FL) is a collaborative machine learning technique for training a global model without accessing clients' private data. The main challenges in FL are statistical diversity among clients, the limited computing capability of client devices, and the excessive communication overhead and long latency between the server and clients. To address these problems, we propose a novel personalized federated learning method via maximizing correlation (pFedMac), and further extend it to sparse and hierarchical models. By minimizing loss functions that incorporate an approximated L1-norm and hierarchical correlation, the performance on statistically diverse data is improved, and the communication and computation loads required in the network are reduced. Theoretical proofs show that pFedMac outperforms L2-norm-distance-based personalization methods. Experimentally, we demonstrate the benefits of this sparse hierarchical personalization architecture compared with state-of-the-art personalization methods and their extensions (e.g., pFedMac achieves 99.75% accuracy on MNIST and 87.27% accuracy on Synthetic under heterogeneous and non-i.i.d. data distributions).
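For concreteness, a local objective combining a correlation-maximizing term with a smooth L1 approximation could take the following form. This is a hedged sketch only: the symbols f_i, theta_i, w, lambda, gamma, and epsilon are illustrative assumptions, not necessarily the paper's exact formulation.

\min_{\theta_i} \; f_i(\theta_i) \;-\; \lambda \, \langle \theta_i, w \rangle \;+\; \gamma \sum_{j} \sqrt{\theta_{i,j}^2 + \epsilon}

Here f_i is client i's empirical loss, the inner product \langle \theta_i, w \rangle rewards correlation between the personalized model \theta_i and the global model w (maximizing it rather than penalizing an L2 distance \|\theta_i - w\|^2), and the last term is a differentiable surrogate for \|\theta_i\|_1 that encourages sparsity, reducing the parameters that must be stored and communicated.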