Federated learning (FL) is a decentralized, privacy-preserving machine learning technique in which a group of clients collaborates with a server to learn a global model without sharing the clients' data. One challenge in FL is statistical diversity among clients, which prevents the global model from delivering good performance on each client's task. To address this, we propose an algorithm for personalized FL (pFedMe) that uses Moreau envelopes as the clients' regularized loss functions, which help decouple personalized-model optimization from global-model learning in a bi-level problem tailored to personalized FL. Theoretically, we show that pFedMe's convergence rate is state-of-the-art: it achieves quadratic speedup for strongly convex objectives and sublinear speedup of order 2/3 for smooth nonconvex objectives. Experimentally, we verify that pFedMe outperforms the vanilla FedAvg and Per-FedAvg, a meta-learning-based personalized FL algorithm.
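To make the bi-level structure concrete, the following is a sketch of the formulation the abstract alludes to, with notation assumed here rather than taken from the source: $f_i$ denotes client $i$'s loss, $\lambda > 0$ the regularization parameter, $N$ the number of clients, and $w$ the global model. Each client's regularized loss is the Moreau envelope of its own loss,
\[
F_i(w) = \min_{\theta_i \in \mathbb{R}^d} \Big\{ f_i(\theta_i) + \frac{\lambda}{2} \lVert \theta_i - w \rVert^2 \Big\},
\]
and the server solves the outer problem
\[
\min_{w \in \mathbb{R}^d} \; \frac{1}{N} \sum_{i=1}^{N} F_i(w).
\]
The personalized model is the inner minimizer, i.e., the proximal point $\hat{\theta}_i(w) = \operatorname{prox}_{f_i / \lambda}(w)$: each client optimizes its own loss while staying within a $\lambda$-controlled distance of the global model, which is the decoupling of personalized and global optimization described above.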