Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency. As a result, only a small subset of clients can participate in FL at a given time. It is important to understand how partial client participation affects convergence, but most existing works have either considered idealized participation patterns or obtained results with non-zero optimality error for generic patterns. In this paper, we provide a unified convergence analysis for FL with arbitrary client participation. We first introduce a generalized version of federated averaging (FedAvg) that amplifies parameter updates at an interval of multiple FL rounds. Then, we present a novel analysis that captures the effect of client participation in a single term. By analyzing this term, we obtain convergence upper bounds for a wide range of participation patterns, including both non-stochastic and stochastic cases, which match either the lower bound of stochastic gradient descent (SGD) or the state-of-the-art results in specific settings. We also discuss various insights, recommendations, and experimental results.
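The abstract's generalized FedAvg can be illustrated with a minimal sketch: clients run local SGD, the server averages the updates of whichever clients participate in a round, and once every P rounds the update accumulated over that interval is amplified by a factor before being applied. The toy quadratic objectives and the names local_steps, P, and eta_g below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of generalized FedAvg with arbitrary client participation.
# Assumptions (not from the paper): a toy quadratic objective per client,
# uniform random participation, and hyperparameter names P and eta_g.
import numpy as np

rng = np.random.default_rng(0)
d, num_clients = 10, 20
# Heterogeneous toy objectives: client i minimizes ||x - c_i||^2 / 2,
# so the global optimum is the mean of the centers c_i.
centers = rng.normal(size=(num_clients, d))

def local_update(x, c, local_steps=5, lr=0.1):
    """Run a few local gradient steps and return the parameter change."""
    y = x.copy()
    for _ in range(local_steps):
        y -= lr * (y - c)  # gradient of ||y - c||^2 / 2 is (y - c)
    return y - x

x = np.zeros(d)
P, eta_g = 4, 2.0          # amplification interval and factor (assumed values)
accum = np.zeros(d)        # model drift accumulated within the interval

for t in range(200):
    # Arbitrary participation: only a small subset of clients shows up.
    participants = rng.choice(num_clients, size=5, replace=False)
    avg_delta = np.mean(
        [local_update(x + accum, centers[i]) for i in participants], axis=0
    )
    accum += avg_delta
    if (t + 1) % P == 0:   # amplify the accumulated update once per P rounds
        x += eta_g * accum
        accum = np.zeros(d)

print("distance to global optimum:", np.linalg.norm(x - centers.mean(axis=0)))
```

Note that setting P = 1 and eta_g = 1 recovers standard FedAvg, which is why the abstract calls this a generalized version of the algorithm.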