One of the crucial issues in federated learning is how to develop efficient optimization algorithms. Most current algorithms require full device participation and/or impose strong assumptions to guarantee convergence. In contrast to the widely used gradient descent-based algorithms, in this paper we develop an inexact alternating direction method of multipliers (ADMM), which is both computation- and communication-efficient, capable of combating the straggler effect, and convergent under mild conditions. Furthermore, it achieves strong numerical performance compared with several state-of-the-art algorithms for federated learning.
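To make the idea concrete, the following is a minimal sketch of a consensus-form inexact ADMM for federated learning, not the paper's exact algorithm: each client holds a local copy of the model, solves its augmented Lagrangian subproblem inexactly with a few gradient steps, updates its dual variable, and only a sampled subset of clients participates in each round (which is how stragglers are tolerated). All names, step sizes, and the sampling scheme are illustrative assumptions.

```python
import numpy as np

# Consensus reformulation:  min_x sum_i f_i(x)
#   <=>  min sum_i f_i(x_i)  s.t.  x_i = z for all clients i.
# Each round, only a sampled subset of clients updates (partial participation);
# local subproblems are solved inexactly with a few gradient steps.
def inexact_admm(f_grads, dim, rho=1.0, rounds=100, local_steps=5,
                 lr=0.1, sample_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    m = len(f_grads)                # number of clients
    z = np.zeros(dim)               # global (server) variable
    x = np.zeros((m, dim))          # local primal variables
    y = np.zeros((m, dim))          # dual variables (multipliers)

    for _ in range(rounds):
        # Sample a subset of clients; stragglers simply skip the round.
        k = max(1, int(sample_frac * m))
        active = rng.choice(m, size=k, replace=False)
        for i in active:
            # Inexact local solve of
            #   min_{x_i} f_i(x_i) + y_i^T (x_i - z) + (rho/2)||x_i - z||^2
            # using a few gradient steps instead of an exact minimizer.
            for _ in range(local_steps):
                g = f_grads[i](x[i]) + y[i] + rho * (x[i] - z)
                x[i] -= lr * g
            # Local dual ascent step.
            y[i] += rho * (x[i] - z)
        # Server aggregation: closed-form z-update for the consensus constraint.
        z = np.mean(x + y / rho, axis=0)
    return z
```

As a usage example, one could pass `f_grads = [lambda w, A=A_i, b=b_i: A.T @ (A @ w - b) for A_i, b_i in client_data]` for local least-squares objectives. The per-round cost per client is only a few gradient evaluations, and each client communicates a single vector per round, which illustrates the computation and communication efficiency claimed above.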