Federated learning (FL) is a subfield of machine learning where multiple clients try to collaboratively learn a model over a network under communication constraints. We consider finite-sum federated optimization under a second-order function similarity condition and strong convexity, and propose two new algorithms: SVRP and Catalyzed SVRP. This second-order similarity condition has grown popular recently, and is satisfied in many applications including distributed statistical learning and differentially private empirical risk minimization. The first algorithm, SVRP, combines approximate stochastic proximal point evaluations, client sampling, and variance reduction. We show that SVRP is communication efficient and achieves superior performance to many existing algorithms when function similarity is high enough. Our second algorithm, Catalyzed SVRP, is a Catalyst-accelerated variant of SVRP that achieves even better performance and uniformly improves upon existing algorithms for federated optimization under second-order similarity and strong convexity. In the course of analyzing these algorithms, we provide a new analysis of the Stochastic Proximal Point Method (SPPM) that might be of independent interest. Our analysis of SPPM is simple, allows for approximate proximal point evaluations, does not require any smoothness assumptions, and shows a clear benefit in communication complexity over ordinary distributed stochastic gradient descent.
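Since the abstract only names SVRP's ingredients, the following is a minimal sketch of how they might fit together, not the paper's actual algorithm. It assumes each client exposes a gradient oracle, and it approximates the stochastic proximal point step with a few inner gradient iterations (an assumption; the paper only requires approximate proximal evaluations). The anchor point `w`, step size `gamma`, and inner-solver parameters are illustrative placeholders.

```python
import numpy as np

def svrp_sketch(clients, x0, gamma, outer_iters, inner_iters, prox_steps, eta):
    """Hedged sketch of an SVRP-style loop.

    clients: list of gradient oracles grad_i(x) -> np.ndarray, one per client.
    The proximal subproblem is solved inexactly by `prox_steps` gradient
    steps with step size `eta` (our assumption, not the paper's solver).
    """
    n = len(clients)
    x = x0.copy()
    for _ in range(outer_iters):
        w = x.copy()  # anchor point for variance reduction
        g_full = np.mean([g(w) for g in clients], axis=0)  # full gradient at anchor
        for _ in range(inner_iters):
            i = np.random.randint(n)          # client sampling
            shift = g_full - clients[i](w)    # variance-reduction correction
            # Approximate proximal point of f_i(.) + <shift, .> around x:
            # y ~ argmin_y f_i(y) + <shift, y> + ||y - x||^2 / (2 * gamma)
            y = x.copy()
            for _ in range(prox_steps):
                y -= eta * (clients[i](y) + shift + (y - x) / gamma)
            x = y
    return x
```

In this reading, only one sampled client solves a local subproblem per inner step, while the periodic full-gradient anchor supplies the variance-reduced correction; this is consistent with the abstract's claim of communication efficiency, since full synchronization happens only once per outer loop.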