Federated learning enables training on a massive number of edge devices. To improve flexibility and scalability, we propose a new asynchronous federated optimization algorithm. We prove that the proposed approach achieves near-linear convergence to a global optimum, both for strongly convex problems and for a restricted family of non-convex problems. Empirical results show that the proposed algorithm converges quickly and tolerates staleness in various applications.
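The abstract does not state the update rule itself, but a common form of asynchronous federated optimization mixes each arriving client model into the global model with a weight that decays with the update's staleness. The sketch below is a minimal illustration under that assumption; the names `AsyncFedServer` and `staleness_weight`, the polynomial decay exponent `a`, and the base mixing rate `alpha` are hypothetical choices for illustration, not taken from the source.

```python
import numpy as np


def staleness_weight(alpha: float, staleness: int, a: float = 0.5) -> float:
    # Hypothetical polynomial decay: the staler a client update is,
    # the smaller its mixing weight into the global model.
    return alpha * (staleness + 1) ** (-a)


class AsyncFedServer:
    def __init__(self, model: np.ndarray, alpha: float = 0.6):
        self.model = model    # global model parameters
        self.version = 0      # counts global updates applied so far
        self.alpha = alpha    # base mixing rate

    def on_client_update(self, client_model: np.ndarray, client_version: int) -> np.ndarray:
        # Staleness = number of global updates applied since the client
        # fetched the model version it trained on.
        staleness = self.version - client_version
        alpha_t = staleness_weight(self.alpha, staleness)
        # Asynchronous update: fold the (possibly stale) client model into
        # the global model immediately, without waiting for other devices.
        self.model = (1 - alpha_t) * self.model + alpha_t * client_model
        self.version += 1
        return self.model
```

In this setup, each edge device pulls the current model and its version number, runs local training, and pushes the result back; the server applies updates as they arrive rather than synchronizing devices into rounds, which is what gives the asynchronous scheme its flexibility and staleness tolerance.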