Federated Learning (FL), a distributed machine learning technique, has recently grown tremendously in popularity due to its emphasis on user data privacy. However, the distributed computation of FL can lead to constrained communication and drawn-out learning processes, necessitating optimization of client-server communication costs. The fraction of selected clients and the number of local training passes are two hyperparameters that have a significant impact on FL performance. Because training preferences vary across applications, it can be difficult for FL practitioners to select such hyperparameters manually. In this paper, we introduce FedAVO, a novel FL algorithm that improves communication efficiency by selecting the best hyperparameters with the African Vulture Optimizer (AVO). Our research demonstrates that the communication costs associated with FL operations can be substantially reduced by adopting AVO for FL hyperparameter tuning. Through extensive evaluations of FedAVO on benchmark datasets, we show that FedAVO achieves significant improvements in model accuracy and the number of communication rounds, particularly in realistic Non-IID settings. Our evaluation also identifies the optimal hyperparameters for the benchmark datasets, ultimately increasing global model accuracy by 6% compared to state-of-the-art FL algorithms (such as FedAvg, FedProx, and FedPSO).
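To make the tuning loop concrete, the following is a minimal sketch, not the paper's implementation, of how a vulture-style optimizer could search over the two hyperparameters named above (client fraction and local epochs) across FL rounds. The function names (`train_round`, `fedavo_sketch`) and the toy fitness surface are hypothetical stand-ins: a real system would replace the simulated fitness with actual validation accuracy of the aggregated global model, and the simple move-toward-best update with AVO's full exploration/exploitation dynamics.

```python
import random

def train_round(client_fraction, local_epochs):
    """Simulated global-model accuracy after one round (toy fitness).

    In a real FL system this would sample `client_fraction` of the
    clients, run `local_epochs` passes of local training on each,
    aggregate the updates, and return validation accuracy.
    """
    # Toy surface peaking near fraction=0.3, epochs=5 (illustrative only).
    return (1.0 - abs(client_fraction - 0.3)
                - 0.02 * abs(local_epochs - 5)
                + random.gauss(0, 0.01))

def fedavo_sketch(rounds=30, seed=0):
    """Crude stand-in for AVO-driven hyperparameter search in FL."""
    random.seed(seed)
    # Initial population of (client_fraction, local_epochs) candidates.
    population = [(random.uniform(0.1, 1.0), random.randint(1, 10))
                  for _ in range(5)]
    best, best_fit = None, float("-inf")
    for _ in range(rounds):
        # Evaluate every candidate and track the best "vulture" so far.
        for frac, epochs in population:
            fit = train_round(frac, epochs)
            if fit > best_fit:
                best, best_fit = (frac, epochs), fit
        # Perturb candidates toward the current best, with noise
        # (a simplification of AVO's exploration/exploitation phases).
        population = [
            (min(1.0, max(0.1, best[0] + random.gauss(0, 0.05))),
             min(10, max(1, best[1] + random.choice([-1, 0, 1]))))
            for _ in population
        ]
    return best, best_fit

if __name__ == "__main__":
    (frac, epochs), acc = fedavo_sketch()
    print(f"best client fraction={frac:.2f}, local epochs={epochs}, "
          f"simulated accuracy={acc:.3f}")
```

The design point the sketch illustrates is that hyperparameter search piggybacks on the communication rounds the server already performs, so well-chosen values of the client fraction and local epoch count can cut the number of rounds needed to reach a target accuracy.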