Federated learning (FL) is an emerging privacy-preserving distributed learning scheme. Due to the large model size and frequent model aggregation, FL suffers from a critical communication bottleneck. Many techniques have been proposed to reduce the communication volume, including model compression and quantization. Existing adaptive quantization schemes follow an ascending trend, in which the quantization level increases as training progresses. In this paper, we formulate the problem as optimizing the training convergence rate under a given communication volume. The result shows that the optimal quantization level is determined by two factors, namely the training loss and the range of the model updates, and that it is preferable to decrease the quantization level rather than increase it. We then propose two descending quantization schemes based on the training loss and the model range, respectively. Experimental results show that the proposed schemes not only reduce the communication volume but also help FL converge faster than existing ascending quantization schemes.
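To make the idea concrete, the following is a minimal sketch, not the paper's implementation, of a loss-driven descending quantization schedule: the number of quantization levels shrinks as the training loss drops, and each model update is stochastically quantized onto a uniform grid spanning its own range. The function names and the loss-to-level mapping are illustrative assumptions.

```python
# Sketch only: loss-driven descending quantization of a model update.
# loss_to_levels and quantize_update are hypothetical helpers, and the
# linear loss-to-level mapping is an assumption, not the paper's rule.
import numpy as np

def loss_to_levels(loss, initial_loss, max_levels=256, min_levels=4):
    """Map the current training loss to a quantization level count.

    As training converges (loss shrinks relative to the initial loss),
    fewer levels are used, i.e. a descending quantization schedule.
    """
    ratio = min(max(loss / initial_loss, 0.0), 1.0)
    return max(min_levels, int(min_levels + ratio * (max_levels - min_levels)))

def quantize_update(update, levels, rng):
    """Stochastically quantize a model update onto a uniform grid
    spanning the update's own range [min, max]."""
    lo, hi = update.min(), update.max()
    if hi == lo:
        return np.full_like(update, lo)
    step = (hi - lo) / (levels - 1)
    scaled = (update - lo) / step
    floor = np.floor(scaled)
    # Randomized rounding keeps the quantizer unbiased in expectation.
    quantized = floor + (rng.random(update.shape) < (scaled - floor))
    return lo + quantized * step

# Example: a client quantizes its update with a loss-dependent level count.
rng = np.random.default_rng(0)
update = rng.normal(size=1000).astype(np.float32)
levels = loss_to_levels(loss=0.3, initial_loss=2.3)
q_update = quantize_update(update, levels, rng)
```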