Federated learning allows collaborative workers to solve a machine learning problem while preserving data privacy. Recent studies have tackled various challenges in federated learning, but the joint optimization of communication overhead, learning reliability, and deployment efficiency remains an open problem. To this end, we propose a new scheme named federated learning via plurality vote (FedVote). In each communication round of FedVote, workers transmit binary or ternary weights to the server with low communication overhead. The model parameters are aggregated via weighted voting to enhance resilience against Byzantine attacks. When deployed for inference, the model with binary or ternary weights is resource-friendly to edge devices. We show that our proposed method reduces quantization error and converges faster than methods that directly quantize the model updates.
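The weighted-voting aggregation described above can be sketched as follows. This is an illustrative sketch only, not the paper's exact rule: the function name `fedvote_aggregate`, the uniform default vote weights, and the tie-breaking behavior of `argmax` are all assumptions made for the example.

```python
import numpy as np

def fedvote_aggregate(worker_weights, vote_weights=None):
    """Aggregate ternary weight arrays by (optionally weighted) plurality vote.

    worker_weights: list of arrays with entries in {-1, 0, +1}.
    vote_weights: optional per-worker reliability scores (uniform if None).
    Illustrative sketch of the voting idea, not the paper's exact rule.
    """
    W = np.stack(worker_weights)                 # shape (n_workers, ...)
    if vote_weights is None:
        vote_weights = np.ones(len(worker_weights))
    # Broadcast each worker's vote weight over its parameter array.
    v = np.asarray(vote_weights, dtype=float).reshape(-1, *([1] * (W.ndim - 1)))
    # Tally weighted votes for each candidate value -1, 0, +1 per parameter.
    tallies = np.stack([(v * (W == c)).sum(axis=0) for c in (-1, 0, 1)])
    # Plurality winner per entry; map tally index {0,1,2} back to {-1,0,+1}.
    return np.array([-1, 0, 1])[tallies.argmax(axis=0)]
```

With uniform vote weights this reduces to a plain majority vote per parameter; raising a worker's vote weight gives its transmitted ternary weights more influence, which is how the scheme can down-weight unreliable or Byzantine workers.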