Federated learning (FL) has attracted increasing attention with the emergence of distributed data. While extensive FL algorithms have been proposed for non-convex distributed problems, FL in practice still faces numerous challenges, such as the large number of training iterations required to converge as the sizes of models and datasets keep increasing, and the lack of adaptivity in SGD-based model updates. Meanwhile, the study of adaptive methods in federated learning is scarce, and existing works either lack a complete theoretical convergence guarantee or have suboptimal sample complexity. In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on the momentum-based variance-reduction technique in the cross-silo FL setting. We first explore how to design an adaptive algorithm in the FL setting. By providing a counter-example, we show that a naive combination of FL and adaptive methods can lead to divergence. More importantly, we provide a convergence analysis for our method and prove that our algorithm is the first adaptive FL algorithm to reach the best-known sample complexity of $O(\epsilon^{-3})$ and $O(\epsilon^{-2})$ communication rounds to find an $\epsilon$-stationary point without large batches. Experimental results on a language modeling task and an image classification task with heterogeneous data demonstrate the efficiency of our algorithm.
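To make the abstract's key ingredient concrete, below is a minimal single-machine sketch of a momentum-based variance-reduced (STORM-style) gradient estimator combined with an Adam-like adaptive step size, run on a toy quadratic with noisy gradients. This is an illustrative assumption only: the function `storm_adaptive_quadratic`, its hyperparameters, and the toy objective are hypothetical and do not reproduce FAFED's actual federated update or aggregation rules.

```python
import numpy as np

def storm_adaptive_quadratic(x0, steps=300, sigma=0.1, eta=0.2, a=0.3,
                             beta=0.9, eps=1e-8, seed=0):
    """Sketch: STORM-style momentum variance reduction plus an adaptive
    (second-moment-scaled) step, on f(x) = 0.5 * ||x||^2 with noisy
    gradients g(x; z) = x + z. Hypothetical hyperparameters; this is
    not the paper's FAFED algorithm, only the underlying idea.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    z = rng.normal(0.0, sigma, size=x.shape)
    d = x + z                  # initial stochastic gradient estimate
    v = d * d                  # second-moment accumulator (adaptive scale)
    for _ in range(steps):
        x_prev = x
        # Adaptive step: coordinate-wise scaling by sqrt of second moment.
        x = x - eta * d / (np.sqrt(v) + eps)
        # Fresh sample z_t, evaluated at BOTH the new and old iterate,
        # as required by the STORM correction term.
        z = rng.normal(0.0, sigma, size=x.shape)
        g_new, g_old = x + z, x_prev + z
        d = g_new + (1.0 - a) * (d - g_old)   # variance-reduced estimate
        v = beta * v + (1.0 - beta) * d * d   # update adaptive scale
    return x
```

The correction term $(1-a)(d_{t-1} - g(x_{t-1}; z_t))$ is what shrinks the estimator's variance as consecutive iterates get close, which is why such methods can avoid the large batches mentioned in the abstract.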