Personalized Federated Learning (PFL) has recently seen tremendous progress, enabling the design of novel machine learning applications that preserve the privacy of the data used for training. Existing theoretical results in this field mainly focus on distributed optimization for minimization problems. This paper is the first to study PFL for saddle point problems, which cover a broader class of optimization tasks and are thus of more relevance for applications than minimization alone. In this work, we consider a recently proposed PFL setting with a mixing objective function: an approach that combines learning a global model with local distributed learners. Unlike most previous papers, which considered only the centralized setting, we work in a more general, decentralized setup. This allows us to design and analyze more practical and federated ways of connecting devices to the network. We present two new algorithms for this problem. A theoretical analysis of the methods is given for smooth (strongly-)convex-(strongly-)concave saddle point problems. We also demonstrate the effectiveness of our problem formulation and the proposed algorithms in experiments with neural networks with adversarial noise.
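To make the "mixing objective" setting concrete, one plausible form of the formulation, sketched here under the assumption of a quadratic proximity penalty in the style of mixing-type PFL for minimization (the coupling weight $\lambda$ and the per-device variable pairs $(x_m, y_m)$ are illustrative notation, not taken from the abstract), is:

```latex
\min_{x_1,\dots,x_M} \; \max_{y_1,\dots,y_M} \;
\frac{1}{M}\sum_{m=1}^{M} f_m(x_m, y_m)
\;+\; \frac{\lambda}{2M}\sum_{m=1}^{M} \bigl\| x_m - \bar{x} \bigr\|^2
\;-\; \frac{\lambda}{2M}\sum_{m=1}^{M} \bigl\| y_m - \bar{y} \bigr\|^2,
\qquad
\bar{x} = \frac{1}{M}\sum_{m=1}^{M} x_m, \;\;
\bar{y} = \frac{1}{M}\sum_{m=1}^{M} y_m .
```

Here each device $m$ holds a local saddle point objective $f_m$ over its personal variables $(x_m, y_m)$, while the penalty terms pull the local models toward the averages $(\bar{x}, \bar{y})$; small $\lambda$ favors fully local models and large $\lambda$ recovers a single global model. The sign on the $y$-penalty preserves concavity in the max variables.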