One of the crucial issues in federated learning is how to develop efficient optimization algorithms. Most current algorithms require full device participation and/or impose strong assumptions to guarantee convergence. Unlike the widely used gradient descent-based algorithms, this paper develops an inexact alternating direction method of multipliers (ADMM) that is both computation- and communication-efficient, capable of combating the stragglers' effect, and convergent under mild conditions.
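To make the setting concrete, the following is a minimal illustrative sketch (not the paper's exact scheme) of inexact consensus ADMM for federated learning: the global problem min_z sum_i f_i(z) is rewritten as min sum_i f_i(x_i) subject to x_i = z, each sampled device approximately solves its augmented-Lagrangian subproblem with a few gradient steps (the "inexact" part), and only a random subset of devices participates per round, so stragglers merely skip rounds. All names and parameters here (local_grads, sigma, participation, etc.) are assumptions for illustration.

```python
import numpy as np

def inexact_admm_fl(local_grads, dim, num_devices, rounds=100,
                    sigma=1.0, local_steps=5, lr=0.01,
                    participation=0.5, seed=0):
    """Illustrative inexact consensus ADMM for federated learning.

    local_grads[i](x) returns the gradient of device i's loss f_i at x.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros((num_devices, dim))    # local models x_i
    lam = np.zeros((num_devices, dim))  # dual variables lambda_i
    z = np.zeros(dim)                   # global (server) model

    for _ in range(rounds):
        # Server step: z minimizes sum_i [-<lam_i, z> + (sigma/2)||x_i - z||^2],
        # i.e., the average of x_i + lam_i / sigma.
        z = (x + lam / sigma).mean(axis=0)

        # Partial participation: sample a subset of devices this round;
        # non-participating devices (e.g., stragglers) keep their old state.
        active = rng.random(num_devices) < participation
        for i in np.flatnonzero(active):
            # Inexact x_i-update: a few gradient steps on
            # f_i(x_i) + <lam_i, x_i - z> + (sigma/2)||x_i - z||^2.
            for _ in range(local_steps):
                g = local_grads[i](x[i]) + lam[i] + sigma * (x[i] - z)
                x[i] -= lr * g
            # Dual ascent step for lambda_i.
            lam[i] += sigma * (x[i] - z)

    return z

# Toy usage: device i holds f_i(x) = 0.5 * ||x - a_i||^2, so the
# minimizer of the sum is the mean of the a_i.
targets = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 4.0])]
grads = [lambda x, a=a: x - a for a in targets]
z = inexact_admm_fl(grads, dim=2, num_devices=3, rounds=200)
print(z)  # approximately the mean of the targets
```

In this sketch, communication efficiency comes from exchanging only the model and dual increments once per round, and computation efficiency from capping local work at a few gradient steps rather than solving each subproblem exactly.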