We study the optimization aspects of personalized Federated Learning (FL). We develop a universal optimization theory applicable to all strongly convex personalized FL models in the literature. In particular, we propose a general personalized objective capable of recovering essentially any existing personalized FL objective as a special case. We design several optimization techniques to minimize this general objective, namely a tailored variant of Local SGD and variants of accelerated coordinate descent/accelerated SVRCD. We demonstrate the practicality and/or optimality of our methods in terms of both communication and local computation. Surprisingly, our general optimization theory recovers the best-known communication and computation guarantees for solving specific personalized FL objectives.