Federated Learning (FL) has shown considerable promise for Machine Learning (ML) across numerous devices, offering privacy protection, efficient data utilization, and dynamic collaboration. However, mobile devices typically have limited and heterogeneous computational capabilities, and different devices may even work on different tasks. This client heterogeneity is a major bottleneck hindering the practical application of FL. Existing work mainly focuses on mitigating the computation and communication overhead of FL for a single task, while overlooking the heterogeneity of computing resources across devices. To tackle this, we design FedAPTA, a federated multi-task learning framework. FedAPTA overcomes computing resource heterogeneity through a layer-wise model pruning technique that reduces local model size while accounting for both data and device heterogeneity. To aggregate structurally heterogeneous local models of different tasks, we introduce a heterogeneous model recovery strategy and a task-aware model aggregation method, which enable aggregation by infilling each local model's architecture with the shared global model and clustering local models according to their specific tasks. We deploy FedAPTA on a realistic FL platform and benchmark it against nine state-of-the-art FL methods. The experimental results demonstrate that FedAPTA outperforms the state-of-the-art FL methods by up to 4.23%. Our code is available at https://github.com/Zhenzovo/FedAPTA.
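To make the aggregation idea in the abstract concrete, the following is a minimal, hypothetical Python sketch (not the authors' released code): pruned local models are first infilled with the shared global model's layers, then averaged within per-task clusters. Layer names, client and task identifiers, and plain uniform averaging are illustrative assumptions.

```python
# Hypothetical sketch of the two ideas described above:
# (1) recover structurally pruned local models by infilling missing layers
#     with the shared global model, and (2) aggregate recovered models
#     within each task cluster. Not the authors' implementation.
from collections import defaultdict
import numpy as np

def recover_local_model(local_update, global_model):
    """Infill layers a client pruned away with the global model's weights."""
    return {name: local_update.get(name, weights)
            for name, weights in global_model.items()}

def task_aware_aggregate(local_updates, client_tasks, global_model):
    """Average recovered local models within each task cluster."""
    clusters = defaultdict(list)
    for client_id, update in local_updates.items():
        recovered = recover_local_model(update, global_model)
        clusters[client_tasks[client_id]].append(recovered)

    task_models = {}
    for task, models in clusters.items():
        task_models[task] = {
            name: np.mean([m[name] for m in models], axis=0)
            for name in global_model
        }
    return task_models

# Toy usage: two layers, three clients, two tasks; client "c2" pruned "layer2".
global_model = {"layer1": np.ones((2, 2)), "layer2": np.zeros((2, 2))}
local_updates = {
    "c0": {"layer1": np.full((2, 2), 2.0), "layer2": np.full((2, 2), 1.0)},
    "c1": {"layer1": np.full((2, 2), 4.0), "layer2": np.full((2, 2), 3.0)},
    "c2": {"layer1": np.full((2, 2), 6.0)},  # pruned model: no "layer2"
}
client_tasks = {"c0": "taskA", "c1": "taskA", "c2": "taskB"}
print(task_aware_aggregate(local_updates, client_tasks, global_model))
```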