We study the optimization aspects of personalized Federated Learning (FL). We propose general optimizers that can be used to solve essentially any existing personalized FL objective: a tailored variant of Local SGD and variants of accelerated coordinate descent/accelerated SVRCD. By studying a general personalized objective capable of recovering essentially any existing personalized FL objective as a special case, we develop a universal optimization theory applicable to all strongly convex personalized FL models in the literature. We demonstrate the practicality and/or optimality of our methods in terms of both communication and local computation. Perhaps surprisingly, our general optimizers and theory recover the best-known communication and computation guarantees for solving specific personalized FL objectives. Thus, our proposed methods can serve as universal optimizers that render the design of task-specific optimizers unnecessary in many cases.
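To make the setting concrete, the sketch below shows a Local-SGD-style method for one well-known special case of a general personalized FL objective: a mixture formulation that penalizes the distance of each client's personalized model from the clients' average. This is an illustrative assumption, not the paper's exact algorithm; the function names (`personalized_local_sgd`, `grad_fns`) and hyperparameter values are hypothetical.

```python
import numpy as np

def personalized_local_sgd(grad_fns, d, lam=0.1, lr=0.05,
                           rounds=50, local_steps=10, seed=0):
    """Sketch of Local SGD for the mixture-penalty personalized objective
        min_{x_1..x_n} (1/n) * sum_i f_i(x_i)
                       + (lam / 2n) * sum_i ||x_i - xbar||^2,
    where xbar is the average of the client models. The penalty strength
    lam interpolates between purely local models (lam = 0) and a single
    shared model (lam -> infinity)."""
    n = len(grad_fns)
    rng = np.random.default_rng(seed)
    X = np.zeros((n, d))              # one personalized model per client
    for _ in range(rounds):
        xbar = X.mean(axis=0)         # communication: server averages models
        for i in range(n):            # clients run in parallel in practice
            for _ in range(local_steps):
                # stochastic gradient of f_i plus the gradient of the
                # proximity penalty toward the last communicated average
                g = grad_fns[i](X[i], rng) + lam * (X[i] - xbar)
                X[i] -= lr * g
    return X

# Toy usage: quadratic client losses f_i(x) = 0.5 * ||x - b_i||^2,
# with Gaussian noise added to emulate stochastic gradients.
b = np.random.default_rng(1).normal(size=(4, 3))
grads = [lambda x, rng, bi=bi: (x - bi) + 0.01 * rng.normal(size=x.shape)
         for bi in b]
X = personalized_local_sgd(grads, d=3, lam=0.5)
```

Setting `lam` small keeps the client models close to their purely local optima, while a large `lam` drives them toward a common model, which is the sense in which a single general objective can interpolate across personalized FL formulations.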