Synchronous updates may compromise the efficiency of cross-device federated learning once the number of active clients increases. The \textit{FedBuff} algorithm (Nguyen et al., 2022) alleviates this problem by allowing asynchronous (stale) updates, which enhances the scalability of training while preserving privacy via secure aggregation. We revisit \textit{FedBuff} for asynchronous federated learning and extend the existing analysis by removing the boundedness assumption on the gradient norm. This paper presents a theoretical analysis of the convergence rate of the algorithm when data heterogeneity, batch sizes, and delays are taken into account.
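To make the buffered asynchronous scheme concrete, the sketch below simulates a \textit{FedBuff}-style server loop on a toy least-squares objective: clients compute local SGD deltas from possibly stale copies of the global model, and the server applies an aggregated update only once its buffer holds $K$ deltas. This is a minimal illustration, not the paper's implementation; names such as \texttt{buffer\_size}, \texttt{server\_lr}, and \texttt{max\_staleness} are assumptions introduced here, and the simulation omits client concurrency limits and any staleness-dependent weighting.

\begin{verbatim}
import numpy as np

def client_update(model, data, lr=0.05, steps=5, batch=32):
    # Run a few local SGD steps from a (possibly stale) model copy
    # and return the resulting model delta.
    w = model.copy()
    X, y = data
    for _ in range(steps):
        idx = np.random.choice(len(X), size=min(batch, len(X)), replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w -= lr * grad
    return w - model

def fedbuff(clients, dim, server_steps=50, buffer_size=10,
            server_lr=1.0, max_staleness=3, seed=0):
    # Server loop: collect asynchronously arriving client deltas in a
    # buffer and apply them only once the buffer holds K = buffer_size.
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    history = [w.copy()]              # past models, to simulate staleness
    for _ in range(server_steps):
        buffer = []
        while len(buffer) < buffer_size:
            # Each arriving delta was computed against a stale snapshot
            # taken up to max_staleness server steps in the past.
            lag = rng.integers(0, min(max_staleness, len(history)))
            stale = history[-1 - lag]
            client = clients[rng.integers(len(clients))]
            buffer.append(client_update(stale, client))
        # With secure aggregation, only this aggregate is revealed.
        w = w + server_lr * np.mean(buffer, axis=0)
        history.append(w.copy())
    return w
\end{verbatim}

Here \texttt{clients} would be a list of $(X, y)$ tuples, e.g. synthetic linear-regression shards; under secure aggregation the server would only observe the buffered aggregate of deltas, never an individual client's contribution.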