Federated learning is an emerging machine learning paradigm that enables clients to train collaboratively without exchanging local data. The clients participating in training have a crucial impact on the convergence rate, learning efficiency, and model generalization. In this work, we propose FilFL, a new approach that optimizes client participation and training by introducing client filtering. FilFL periodically filters the available clients to identify a subset that maximizes a combinatorial objective function, using an efficient greedy filtering algorithm. Clients are then selected for training from this filtered-in subset. We provide a thorough analysis of FilFL's convergence in a heterogeneous setting and evaluate its performance across diverse vision and language tasks under realistic federated scenarios with time-varying client availability. Our empirical results demonstrate several benefits of our approach, including improved learning efficiency, faster convergence, and up to 10 percentage points higher test accuracy than training without client filtering.
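The greedy filtering step described above can be illustrated with a generic sketch. The abstract does not define FilFL's actual combinatorial objective, so the `coverage` objective, client names, and the budget parameter `k` below are purely illustrative assumptions: the sketch only shows the standard greedy pattern of repeatedly adding the client with the largest marginal gain.

```python
def greedy_filter(clients, objective, k):
    """Greedily pick up to k clients maximizing a set objective.

    `objective` maps a list of clients to a score; each round adds the
    client with the largest marginal gain, stopping when no client
    improves the objective. (Generic sketch, not FilFL's exact method.)
    """
    selected = []
    remaining = list(clients)  # list keeps tie-breaking deterministic
    while remaining and len(selected) < k:
        best = max(remaining,
                   key=lambda c: objective(selected + [c]) - objective(selected))
        if objective(selected + [best]) - objective(selected) <= 0:
            break  # no remaining client adds value
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical toy objective: how many distinct data labels the subset covers.
client_labels = {"a": {0, 1}, "b": {1, 2}, "c": {2, 3}, "d": {0}}

def coverage(subset):
    return len(set().union(*(client_labels[c] for c in subset))) if subset else 0

print(greedy_filter(list(client_labels), coverage, 3))  # → ['a', 'c']
```

With the toy coverage objective, the greedy pass picks `a` (labels {0, 1}) and then `c` (labels {2, 3}); `b` and `d` add no new labels, so the loop stops early even though the budget allows a third client.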