Federated Learning (FL) is a distributed learning paradigm that trains machine learning models across multiple devices while keeping local data private. However, FL often suffers from data heterogeneity, which leads to inconsistent local optima across clients; these inconsistencies can cause unfavorable convergence behavior and degraded generalization performance. Existing studies typically describe this issue either through \textit{convergence analysis} of gradient norms, which measures how well a model fits the training data, or through \textit{algorithmic stability}, which examines the generalization gap. However, neither approach alone precisely captures the generalization performance of FL algorithms, especially for non-convex neural network training. In response, this paper introduces \textit{Libra}, a generalization dynamics analysis framework for algorithm-dependent excess risk minimization that highlights the trade-off between model stability and gradient norms. We instantiate Libra on a standard federated optimization framework and its variants with server momentum. Through this framework, we show that larger local steps or momentum accelerate the convergence of gradient norms while worsening model stability, yet yield better excess risk overall. Experiments in standard FL settings corroborate our theoretical insights, which can guide hyperparameter tuning and future algorithm design toward stronger generalization.
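For context, the trade-off above can be read through the classical decomposition of excess risk into a stability (generalization-gap) term and an optimization term. The sketch below uses generic notation assumed for illustration ($F$ for the population risk, $F_S$ for the empirical risk on sample $S$, $w_T$ for the trained model, $w^\star$ for the population risk minimizer) and may differ from the paper's exact bound:

% A standard excess-risk decomposition (sketch; notation assumed, not the paper's).
% The expectation is over the sample S and algorithmic randomness, and the equality
% uses \mathbb{E}_S[F_S(w^\star)] = F(w^\star), since w^\star does not depend on S.
\begin{equation*}
\underbrace{\mathbb{E}\big[F(w_T) - F(w^\star)\big]}_{\text{excess risk}}
= \underbrace{\mathbb{E}\big[F(w_T) - F_S(w_T)\big]}_{\text{generalization gap (stability)}}
+ \underbrace{\mathbb{E}\big[F_S(w_T) - F_S(w^\star)\big]}_{\text{optimization error}}
\end{equation*}

In the non-convex regime the optimization term is typically controlled via gradient norms, so larger local steps or momentum shrink the second term while potentially enlarging the first, which matches the trade-off the abstract describes.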