Since it was first proposed, Federated Learning (FL) has been applied in many fields, such as credit assessment and healthcare. Because of differences in network and computing resources, clients may not update their gradients at the same time, which can lead to long waiting or idle periods. This is why Asynchronous Federated Learning (AFL) methods are needed. The main bottleneck in AFL is communication, and finding a balance between model performance and communication cost is a key challenge. This paper proposes a novel AFL framework, VAFL, and verifies its performance through extensive experiments. The experiments show that VAFL can reduce the number of communication rounds by about 51.02\% with an average communication compression rate of 48.23\%, while allowing the model to converge faster. The code is available at \url{https://github.com/RobAI-Lab/VAFL}