Federated Learning (FL) is an active research topic: it applies Machine Learning (ML) in a distributed manner without directly accessing clients' private data. However, FL faces several challenges, including the difficulty of achieving high accuracy, the high communication cost between clients and the server, and security attacks rooted in adversarial ML. To tackle these three challenges, we propose an FL algorithm inspired by evolutionary techniques. The proposed algorithm randomly groups clients into many clusters, each assigned a randomly selected model so that the performance of different models can be explored. The clusters are then trained in an iterative process in which the worst-performing cluster is removed at each iteration until only one cluster remains. In each iteration, some clients are expelled from their clusters, either for using poisoned data or for low performance; the surviving clients are exploited in the next iteration. The remaining cluster, with its surviving clients, is then used to train the best FL model (i.e., the remaining FL model). Communication cost is reduced because fewer clients participate in the final training of the FL model. To evaluate the proposed algorithm, we conduct a number of experiments on the FEMNIST dataset and compare the results against the random FL algorithm. The experimental results show that the proposed algorithm outperforms the baseline algorithm in terms of accuracy, communication cost, and security.
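The cluster-elimination tournament described above can be sketched as follows. This is a minimal illustrative simulation, not the paper's implementation: the `DemoClient` class, the `evaluate` scoring, the model "strength" values, and the `expel_fraction` parameter are all assumptions standing in for real local training, poisoned-data detection, and model evaluation.

```python
import random

class DemoClient:
    """Toy client whose local data quality determines its accuracy (illustrative)."""
    def __init__(self, quality):
        self.quality = quality

    def evaluate(self, model_strength):
        # Stand-in for evaluating a shared model on the client's local data.
        return self.quality * model_strength

def evolutionary_fl_selection(clients, model_strengths, n_clusters=4, expel_fraction=0.2):
    """Sketch of the evolutionary selection loop from the abstract:
    random clusters with random models, expel weak clients each iteration,
    and drop the worst cluster until one survives."""
    random.shuffle(clients)
    clusters = [clients[i::n_clusters] for i in range(n_clusters)]
    cluster_models = [random.choice(model_strengths) for _ in clusters]

    while len(clusters) > 1:
        scores = []
        for model, members in zip(cluster_models, clusters):
            # Expel the worst-performing clients (proxy for poisoned or
            # low-quality local data in this toy setting).
            members.sort(key=lambda c: c.evaluate(model), reverse=True)
            keep = max(1, int(len(members) * (1 - expel_fraction)))
            members[:] = members[:keep]
            scores.append(sum(c.evaluate(model) for c in members) / len(members))
        # Remove the worst-performing cluster in each iteration.
        worst = scores.index(min(scores))
        clusters.pop(worst)
        cluster_models.pop(worst)

    # The surviving model and clients would then train the final FL model.
    return cluster_models[0], clusters[0]
```

Only the surviving clients take part in the final training round, which is where the communication savings over training with all clients come from.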