Federated learning has recently burgeoned in machine learning, giving rise to a variety of research topics. Popular optimization algorithms are based on the frameworks of (stochastic) gradient descent methods or the alternating direction method of multipliers. In this paper, we deploy an exact penalty method for federated learning and propose an algorithm, FedEPM, that tackles four critical issues in federated learning: communication efficiency, computational complexity, the straggler effect, and data privacy. Moreover, the algorithm is proven to converge and is demonstrated empirically to achieve strong numerical performance.
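To fix ideas, a standard way to pose federated learning is as a consensus problem, and a generic exact-penalty reformulation (the specific penalty used by FedEPM may differ; this sketch is only to illustrate the technique named above) replaces the consensus constraint with a nonsmooth penalty term:
\[
\min_{x_1,\dots,x_N,\,z}\ \sum_{i=1}^{N} f_i(x_i)
\quad \text{s.t.} \quad x_i = z,\ \ i = 1,\dots,N,
\]
where \(f_i\) is the local loss of client \(i\) and \(z\) is the shared model, becomes
\[
\min_{x_1,\dots,x_N,\,z}\ \sum_{i=1}^{N} f_i(x_i) \;+\; \lambda \sum_{i=1}^{N} \lVert x_i - z \rVert .
\]
Because the penalty is nonsmooth, this reformulation is exact: for any finite \(\lambda\) above a threshold determined by the Lagrange multipliers of the consensus constraints, minimizers of the penalized problem solve the original constrained problem, in contrast to quadratic penalties, which enforce consensus only approximately as \(\lambda \to \infty\). The unconstrained form also decouples the clients' variables \(x_i\), so each client can update locally while only \(z\) is communicated, which is one way such a formulation can support the communication-efficiency and privacy goals stated above.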