Personalized Federated Learning (PFL) has recently seen tremendous progress, enabling the design of novel machine learning applications that preserve the privacy of the training data. Existing theoretical results in this field mainly focus on distributed optimization for minimization problems. This paper is the first to study PFL for saddle point problems, which cover a broader class of optimization problems and thus admit a richer class of applications than minimization alone. In this work, we consider a recently proposed PFL setting with a mixing objective function, an approach that combines learning a global model with learning local models distributed across devices. Unlike most previous work, which considered only the centralized setting, we work in a more general decentralized setup, which allows us to design and analyze more practical and federated ways of connecting devices to the network. We propose new algorithms for this problem and provide a theoretical analysis of smooth (strongly-)convex-(strongly-)concave saddle point problems in the stochastic and deterministic cases. Numerical experiments on bilinear problems and on neural networks with adversarial noise demonstrate the effectiveness of the proposed methods.
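The abstract does not specify the proposed algorithms, but the class of problems it targets can be illustrated on the simplest instance it mentions: a bilinear saddle point problem. The sketch below is purely illustrative (not the paper's method) and uses the classical extragradient method, a standard baseline for smooth convex-concave problems; the matrix `A` and all parameters are made up for the example.

```python
import numpy as np

def extragradient_bilinear(A, steps=2000, lr=0.1):
    """Solve the toy saddle point problem min_x max_y x^T A y with
    extragradient. The unique saddle point is (0, 0). Plain gradient
    descent-ascent diverges on bilinear problems; the extrapolation
    half-step is what makes extragradient converge here."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    y = rng.standard_normal(A.shape[1])
    for _ in range(steps):
        # Extrapolation step: grad_x = A y, grad_y = A^T x.
        x_half = x - lr * (A @ y)
        y_half = y + lr * (A.T @ x)
        # Update step uses gradients at the extrapolated point.
        x = x - lr * (A @ y_half)
        y = y + lr * (A.T @ x_half)
    return x, y

# Hypothetical 2x2 problem; iterates converge toward the saddle point (0, 0).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
x, y = extragradient_bilinear(A)
```

In the personalized federated setting described above, each device would hold its own saddle point objective, and the local variables would additionally be coupled through a mixing penalty toward a global model; the single-problem sketch only shows the inner saddle point structure.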