We study the performance of federated learning algorithms and their variants in an asymptotic framework. Our starting point is the formulation of federated learning as a multi-criterion objective, where the goal is to minimize each client's loss using information from all of the clients. We analyze a linear regression model, where, for a given client, we theoretically compare the performance of various algorithms in the high-dimensional asymptotic limit. This asymptotic multi-criterion approach naturally models the high-dimensional, many-device nature of federated learning and suggests that personalization is central to federated learning. In this paper, we investigate how some sophisticated personalization algorithms fare against simple fine-tuning baselines. In particular, our theory suggests that Federated Averaging with client fine-tuning is competitive with more intricate meta-learning and proximal-regularized approaches. In addition to being conceptually simpler, our fine-tuning-based methods are computationally more efficient than their competitors. We corroborate our theoretical claims with extensive experiments on federated versions of the EMNIST, CIFAR-100, Shakespeare, and Stack Overflow datasets.
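To make the comparison concrete, the following is a minimal sketch of Federated Averaging with client fine-tuning on per-client linear regression. This is not the paper's implementation; the data-generating process, dimensions, learning rates, and step counts are all illustrative assumptions, chosen only to show the two phases (global averaging, then local adaptation).

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, n_per_client = 20, 10, 50

# Illustrative heterogeneous clients (an assumption, not the paper's setup):
# each client's true parameter is a noisy perturbation of a shared one.
theta_star = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    theta_i = theta_star + 0.5 * rng.normal(size=d)
    X = rng.normal(size=(n_per_client, d))
    y = X @ theta_i + 0.1 * rng.normal(size=n_per_client)
    X_te = rng.normal(size=(n_per_client, d))
    y_te = X_te @ theta_i + 0.1 * rng.normal(size=n_per_client)
    clients.append((X, y, X_te, y_te))

def local_sgd(theta, X, y, lr=0.05, steps=20):
    """A few local gradient steps on the client's squared loss."""
    for _ in range(steps):
        grad = X.T @ (X @ theta - y) / len(y)
        theta = theta - lr * grad
    return theta

# Federated Averaging: each round, every client runs local SGD from the
# current global model and the server averages the resulting models.
theta = np.zeros(d)
for _ in range(100):
    theta = np.mean([local_sgd(theta, X, y) for X, y, _, _ in clients], axis=0)

# Client fine-tuning: each client adapts the shared model with a few
# additional local steps on its own data before evaluation.
for i, (X, y, X_te, y_te) in enumerate(clients):
    theta_ft = local_sgd(theta.copy(), X, y, steps=50)
    mse_global = np.mean((X_te @ theta - y_te) ** 2)
    mse_ft = np.mean((X_te @ theta_ft - y_te) ** 2)
    print(f"client {i}: global test MSE {mse_global:.3f}, fine-tuned {mse_ft:.3f}")
```

Under this kind of heterogeneity, the personalized (fine-tuned) models typically achieve lower per-client test loss than the single shared model, which is the behavior the abstract's multi-criterion framing is meant to capture.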