Federated learning is an emerging machine learning paradigm that enables devices to train collaboratively without exchanging their local data. The clients participating in each training round are a random subset drawn from the pool of available clients. This procedure, known as client selection, is an important area in federated learning because it strongly impacts the convergence rate, learning efficiency, and generalization. In this work, we introduce client filtering in federated learning (FilFL), a new approach to optimize client selection and training. FilFL first filters the active clients by choosing a subset of them that maximizes a specific objective function; a client selection method is then applied to that subset. We provide a thorough analysis of its convergence in a heterogeneous setting. Empirical results demonstrate several benefits of our approach, including improved learning efficiency, $2$-$3\times$ faster convergence, and test accuracy that is $2$-$10$ percentage points higher.
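The two-step procedure described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the greedy maximization strategy, the function names, and the `max_size` parameter are all assumptions introduced here for clarity; the objective function itself is left abstract, since the abstract does not specify it.

```python
import random

def filter_clients(active_clients, objective, max_size):
    """Step 1 (sketch): greedily build a subset of active clients that
    increases the objective, stopping when no client improves it or the
    subset reaches max_size. The greedy rule is an assumption, not the
    paper's method."""
    chosen = []
    candidates = list(active_clients)
    while candidates and len(chosen) < max_size:
        best = max(candidates, key=lambda c: objective(chosen + [c]))
        if objective(chosen + [best]) <= objective(chosen):
            break  # no remaining client improves the objective
        chosen.append(best)
        candidates.remove(best)
    return chosen

def select_clients(filtered, k, seed=None):
    """Step 2: apply an ordinary client-selection rule (here, uniform
    random sampling) to the filtered subset only."""
    rng = random.Random(seed)
    return rng.sample(filtered, min(k, len(filtered)))
```

In a real system, `objective` might score a candidate subset by, for example, the loss reduction it yields on a validation batch; here any set function works.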