Federated Learning (FL) is a promising approach for learning from user data while preserving data privacy. However, the memory and compute demands of model training make it difficult for clients with limited memory or bandwidth to participate. Split Federated Learning addresses this by having clients offload part of the training: each client uploads its intermediate training outputs to a cloud server, which trains the remaining layers collaboratively with the client. This lets resource-constrained clients take part in model training, but it also increases training time and communication overhead. To overcome these limitations, we propose Hierarchical Split Federated Learning (HierSFL), a novel algorithm that aggregates models at both the edge and cloud tiers and provides qualitative guidelines for choosing aggregation intervals that reduce computation and communication costs. By applying local differential privacy at the client and edge-server levels, we further protect local model parameter updates. Experiments on the CIFAR-10 and MNIST datasets show that HierSFL outperforms standard FL approaches in training accuracy, training time, and communication-computation trade-off. HierSFL thus offers a promising solution to the challenges of mobile edge computing, ultimately enabling faster content delivery and improved mobile service quality.
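To make the two-tier aggregation and local-DP steps described above concrete, the following is a minimal NumPy sketch of one round: clients privatize their parameter updates locally, each edge server averages its clients, and the cloud averages the edge-level models. All names (`privatize`, `average`) and hyperparameters (`clip_norm`, `noise_multiplier`) are illustrative assumptions rather than values from the paper, and the Gaussian mechanism stands in for whatever noise calibration the authors use.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(update, clip_norm=1.0, noise_multiplier=1.1):
    """Local DP step: clip the update to an L2 bound, then add Gaussian noise.

    clip_norm and noise_multiplier are illustrative hyperparameters,
    not values taken from the HierSFL paper.
    """
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, update.shape)
    return update * scale + noise

def average(updates):
    """Unweighted FedAvg-style mean of a list of parameter vectors."""
    return np.mean(updates, axis=0)

# Toy setup: 2 edge servers, each serving 3 clients, 10-parameter model.
client_updates = [[rng.normal(size=10) for _ in range(3)] for _ in range(2)]

# Clients add local-DP noise, then each edge server aggregates its clients.
edge_models = [average([privatize(u) for u in clients])
               for clients in client_updates]

# The cloud aggregates the edge-level models into the global model.
global_model = average(edge_models)
```

In the full system, how many client-edge rounds occur before each edge-cloud aggregation would follow the interval guidelines the paper derives; the sketch shows only a single round of each.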