We study the performance of federated learning algorithms and their variants in an asymptotic framework. Our starting point is the formulation of federated learning as a multi-criterion objective, where the goal is to minimize each client's loss using information from all of the clients. We propose a linear regression model in which, for a given client, we theoretically compare the performance of various algorithms in the high-dimensional asymptotic limit. This asymptotic multi-criterion approach naturally models the high-dimensional, many-device nature of federated learning and suggests that personalization is central to federated learning. Our theory suggests that Fine-tuned Federated Averaging (FTFA), i.e., Federated Averaging followed by local training, and the ridge-regularized variant Ridge-tuned Federated Averaging (RTFA) are competitive with more sophisticated meta-learning and proximal-regularized approaches. In addition to being conceptually simpler, FTFA and RTFA are computationally more efficient than their competitors. We corroborate our theoretical claims with extensive experiments on federated versions of the EMNIST, CIFAR-100, Shakespeare, and Stack Overflow datasets.
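To make the two procedures concrete, the following is a minimal NumPy sketch of FTFA and RTFA in the linear regression setting the abstract describes: Federated Averaging produces a global iterate, after which each client runs local gradient steps, optionally with a ridge penalty pulling toward the global model. All function names, hyperparameters (num_rounds, local_steps, lr, ridge), and the full-batch squared-error updates are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fedavg(clients, dim, num_rounds=50, local_steps=10, lr=0.01):
    """Federated Averaging on per-client least-squares losses.

    clients: list of (X, y) pairs, one per client.
    Hyperparameter values here are placeholders, not tuned.
    """
    w = np.zeros(dim)
    for _ in range(num_rounds):
        local_models = []
        for X, y in clients:
            w_k = w.copy()  # each client starts from the server model
            for _ in range(local_steps):
                w_k -= lr * X.T @ (X @ w_k - y) / len(y)
            local_models.append(w_k)
        w = np.mean(local_models, axis=0)  # server averages local models
    return w

def fine_tune(w_global, X, y, ridge=0.0, steps=200, lr=0.01):
    """Local fine-tuning from the FedAvg iterate.

    ridge=0 gives an FTFA-style update (plain local training);
    ridge>0 gives an RTFA-style update, penalizing deviation
    from w_global via (ridge/2) * ||w - w_global||^2.
    """
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + ridge * (w - w_global)
        w -= lr * grad
    return w

# Hypothetical usage on synthetic per-client regression data.
rng = np.random.default_rng(0)
dim, n = 5, 40
clients = []
for _ in range(4):
    theta_k = rng.normal(size=dim)  # client-specific parameter
    X = rng.normal(size=(n, dim))
    clients.append((X, X @ theta_k + 0.1 * rng.normal(size=n)))
w_avg = fedavg(clients, dim)
w_0 = fine_tune(w_avg, *clients[0], ridge=0.1)  # RTFA for client 0
```

The sketch is meant only to show why these methods are computationally cheap: fine-tuning adds a single pass of local training per client on top of standard Federated Averaging, with no meta-gradients or proximal inner solvers.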