Federated learning (FL) is a promising framework for privacy-preserving collaborative learning. In FL, model training tasks are distributed to clients and only the model updates need to be collected at a central server. However, when deployed in mobile edge networks, clients (e.g., smartphones and wearables) may have unpredictable availability and randomly drop out of any training iteration, which hinders FL from achieving convergence. This paper tackles this critical challenge of FL. In particular, we first investigate the convergence of the classical FedAvg algorithm with arbitrary client dropouts. We find that with the common choice of a decaying learning rate, FedAvg can only oscillate within the neighborhood of a stationary point of the global loss function, which is caused by the divergence between the aggregated update and the desired central update. Motivated by this new observation, we then design a novel training algorithm named MimiC, where the server modifies each received model update based on the previous ones. The proposed modification of the received model updates mimics the imaginary central update regardless of which clients drop out. The theoretical analysis of MimiC shows that the divergence between the aggregated update and the central update diminishes with a proper choice of the learning rates, leading to its convergence. Simulation results further demonstrate that MimiC maintains stable convergence performance in the presence of client dropouts and learns better models than the baseline methods.
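The abstract does not give MimiC's exact correction rule, so the following is only a minimal sketch of the general idea it describes: the server compensates for dropped-out clients using previously received updates, so that the aggregate stays close to the imaginary full-participation (central) update. All names, the toy local updates, and the specific substitution rule below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch (not the paper's exact rule): server-side compensation for
# client dropouts using each client's most recently received update, so the
# aggregated update approximates the full-participation average.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, rounds = 10, 5, 50
dropout_prob = 0.3   # assumed per-round dropout probability (arbitrary in the paper)
lr = 0.1             # assumed learning rate

# Toy setup: each client's local update pulls the model toward its own optimum.
client_optima = rng.normal(size=(num_clients, dim))
model = np.zeros(dim)

# Server caches the last update received from every client.
last_update = np.zeros((num_clients, dim))

for t in range(rounds):
    active = rng.random(num_clients) > dropout_prob
    for i in range(num_clients):
        if active[i]:
            # Active client i uploads a fresh local update.
            last_update[i] = client_optima[i] - model
    # Aggregation: active clients contribute fresh updates, dropped clients
    # are represented by their cached (previous) updates.
    aggregated = last_update.mean(axis=0)
    model += lr * aggregated

print("final model:", model)
print("average of client optima:", client_optima.mean(axis=0))
```

With this kind of compensation, the aggregated update no longer collapses to the average over the randomly available clients only, which is the source of the divergence from the central update identified in the abstract.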