Federated Learning (FL) incurs high communication overhead, which can be greatly alleviated by compressing model updates. Yet the tradeoff between compression and model accuracy in the networked environment remains unclear, and, for simplicity, most implementations adopt only a fixed compression rate. In this paper, we systematically examine this tradeoff for the first time, identifying how the compression error affects the final model accuracy with respect to the learning rate. Specifically, we factor the compression error of each global iteration into the convergence rate analysis under both strongly convex and non-convex loss functions. We then present an adaptation framework that maximizes the final model accuracy by strategically adjusting the compression rate in each iteration. We further discuss the key implementation issues of our framework in practical networks with representative compression algorithms. Experiments on the popular MNIST and CIFAR-10 datasets confirm that our solution effectively reduces network traffic while maintaining high model accuracy in FL.
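To make the idea of a per-iteration compression rate concrete, the following is a minimal sketch of top-k sparsification, one representative compression algorithm for model updates. The function names and the interface here are illustrative assumptions, not the paper's actual implementation; the point is only that the fraction of transmitted coordinates (`rate`) can be chosen anew at every global iteration.

```python
import numpy as np

def topk_compress(update: np.ndarray, rate: float) -> np.ndarray:
    """Keep only the largest-magnitude fraction `rate` of the entries
    of a flat model-update vector; zero out the rest.

    This is a generic top-k sparsifier used for illustration; the
    paper's adaptation framework may pair a different compressor
    with its rate-selection rule.
    """
    k = max(1, int(round(rate * update.size)))
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(update), -k)[-k:]
    compressed = np.zeros_like(update)
    compressed[idx] = update[idx]
    return compressed

# Per-iteration usage: the rate can vary across global iterations,
# e.g. driven by the convergence analysis rather than held fixed.
update = np.array([0.1, -3.0, 0.5, 2.0])
sparse = topk_compress(update, rate=0.5)  # transmits 2 of 4 coordinates
```

The compression error of an iteration is then simply `update - sparse`, the quantity whose influence on the final accuracy the analysis tracks.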