Federated learning (FL) has become an active research area, as it enables the collaborative training of machine learning models among multiple clients that hold sensitive local data. Nevertheless, unconstrained federated optimization has been studied mainly via stochastic gradient descent (SGD), which may converge slowly, and constrained federated optimization, which is more challenging, has not been investigated so far. This paper investigates both sample-based and feature-based federated optimization and considers unconstrained as well as constrained nonconvex problems in each setting. First, we propose FL algorithms based on stochastic successive convex approximation (SSCA) and mini-batch techniques. These algorithms can adequately exploit the structures of the objective and constraint functions and incrementally utilize samples. We show that the proposed FL algorithms converge to stationary points of the unconstrained nonconvex problems and to Karush-Kuhn-Tucker (KKT) points of the constrained nonconvex problems. Next, we provide algorithm examples with appealing computational complexity and communication load per communication round. We show that the proposed algorithm examples for unconstrained federated optimization are identical to FL algorithms via momentum SGD, thereby establishing an analytical connection between SSCA and momentum SGD. Finally, numerical experiments demonstrate the inherent advantages of the proposed algorithms in convergence speed, communication and computation costs, and model specifications.
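To make the claimed connection between SSCA and momentum SGD concrete, the following is a minimal sketch, not the paper's exact algorithms: a generic SSCA-style update on a toy least-squares problem, where the surrogate gradient is a recursive average of mini-batch gradients and the surrogate is quadratic. The loss, the step-size rules (rho_t, gamma_t), the curvature parameter tau, and all names here are illustrative assumptions.

```python
# Illustrative sketch only: a generic SSCA-style iteration with a quadratic
# surrogate, showing how it collapses into a momentum-SGD-like update.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
A = rng.normal(size=(n, d))          # toy data matrix (assumed)
y = rng.normal(size=n)               # toy targets (assumed)

def batch_grad(x, idx):
    """Mini-batch gradient of the average loss 0.5 * (a_i^T x - y_i)^2."""
    Ab, yb = A[idx], y[idx]
    return Ab.T @ (Ab @ x - yb) / len(idx)

x = np.zeros(d)
g = np.zeros(d)                      # recursively averaged gradient (surrogate slope)
tau = 1.0                            # curvature of the quadratic surrogate (assumed)
for t in range(1, 201):
    rho = 1.0 / t**0.5               # diminishing averaging step size (assumed rule)
    gamma = 1.0 / t**0.75            # diminishing update step size (assumed rule)
    idx = rng.choice(n, size=32, replace=False)
    # Incremental use of samples: blend the new mini-batch gradient in.
    g = (1 - rho) * g + rho * batch_grad(x, idx)
    # Minimizer of the quadratic surrogate tau/2 * ||z - x||^2 + g^T (z - x).
    x_bar = x - g / tau
    # Convex combination update; equals x - (gamma / tau) * g,
    # i.e., an SGD step along a momentum-like averaged gradient.
    x = (1 - gamma) * x + gamma * x_bar
```

Under these assumptions, the buffer g plays the role of a momentum term, which is the intuition behind the identity between the unconstrained algorithm examples and FL via momentum SGD; the constrained case additionally requires convex approximations of the constraint functions, which this sketch omits.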