Federated Learning (FL) and Split Learning (SL) are two popular paradigms of distributed machine learning. By offloading the computation-intensive portions of the model to the server, SL is promising for deep model training on resource-constrained devices, yet it still lacks a rigorous convergence analysis. In this paper, we derive convergence guarantees of Sequential SL (SSL, the vanilla variant of SL that conducts model training across clients in sequence) for strongly convex, general convex, and non-convex objectives on heterogeneous data. Notably, the derived guarantees suggest that SSL outperforms Federated Averaging (FedAvg, the most popular algorithm in FL) on heterogeneous data. We validate this counterintuitive analytical result empirically on extremely heterogeneous data.
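To make the contrast concrete, the following is a minimal sketch (not the paper's implementation) of the two update patterns mentioned above: SSL-style training relays the model from client to client within a round, whereas FedAvg has all clients start from the same global model in parallel and averages their updates. The client objectives (simple quadratics), the local step count, and the step size are illustrative assumptions; the sketch only shows the communication/update structure, not the convergence comparison established in the paper.

```python
# Illustrative sketch only: contrasts one SSL-style round (sequential relay)
# with one FedAvg round (parallel local training + server-side averaging).
# Client data, losses, and hyperparameters below are assumptions for demonstration.
import numpy as np

# Heterogeneous clients: each client k holds a different local minimizer (assumed).
client_optima = [np.array([1.0]), np.array([-3.0]), np.array([5.0])]

def local_steps(x, x_star, steps=5, lr=0.1):
    """A few gradient steps on the local loss f_k(x) = 0.5 * ||x - x_k*||^2."""
    for _ in range(steps):
        x = x - lr * (x - x_star)
    return x

def ssl_round(x):
    """Sequential (SSL-style) round: the model is relayed through clients in order."""
    for x_star in client_optima:
        x = local_steps(x, x_star)
    return x

def fedavg_round(x):
    """FedAvg round: clients train in parallel from x; the server averages the results."""
    return np.mean([local_steps(x, x_star) for x_star in client_optima], axis=0)

x0 = np.array([10.0])
print("one SSL-style round:", ssl_round(x0))
print("one FedAvg round:   ", fedavg_round(x0))
```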