Non-Independent and Identically Distributed (non-IID) data distribution among clients is considered the key factor that degrades the performance of federated learning (FL). Several approaches to handling non-IID data, such as personalized FL and federated multi-task learning (FMTL), are of great interest to the research community. In this work, we first formulate the FMTL problem using Laplacian regularization to explicitly leverage the relationships among the clients' models for multi-task learning. We then introduce a new view of the FMTL problem, which shows for the first time that the formulated FMTL problem can be used for both conventional FL and personalized FL. We also propose two algorithms, FedU and dFedU, to solve the formulated FMTL problem in communication-centralized and decentralized schemes, respectively. Theoretically, we prove that the convergence rates of both algorithms achieve linear speedup for strongly convex objectives and sublinear speedup of order 1/2 for nonconvex objectives. Experimentally, we show that our algorithms outperform FedAvg, FedProx, SCAFFOLD, and AFL in FL settings, MOCHA in FMTL settings, and pFedMe and Per-FedAvg in personalized FL settings.
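The Laplacian-regularized FMTL objective referred to above can be sketched as follows; the notation here is illustrative rather than the paper's exact statement. With local loss $f_i$ for client $i$ and a weight $a_{ij} \geq 0$ encoding the relationship between the models of clients $i$ and $j$,

```latex
\min_{w_1,\dots,w_N} \; \sum_{i=1}^{N} f_i(w_i)
\;+\; \frac{\eta}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} \, \lVert w_i - w_j \rVert^2 ,
```

where the second term is the graph-Laplacian regularizer. Intuitively, a large $\eta$ forces the clients' models toward a common model (the conventional FL setting), while a smaller $\eta$ lets each $w_i$ stay close to its own data (the personalized FL setting), which is consistent with the abstract's claim that one formulation covers both.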