Decentralized federated learning (DFL) is a variant of federated learning in which edge nodes communicate only with their one-hop neighbors to learn the optimal model. However, because information exchange is restricted to one-hop neighborhoods, inefficient information exchange requires more communication rounds to reach a target training loss, which greatly reduces communication efficiency. In this paper, we propose a new non-uniform quantization of model parameters to improve DFL convergence. Specifically, we first apply the Lloyd-Max algorithm to DFL (LM-DFL) to minimize quantization distortion by adaptively adjusting the quantization levels. A convergence guarantee for LM-DFL is established without assuming a convex loss. Building on LM-DFL, we then propose a new doubly-adaptive DFL, which jointly uses an ascending number of quantization levels to reduce the amount of communicated information during training and adapts the quantization levels to non-uniform gradient distributions. Experimental results on the MNIST and CIFAR-10 datasets illustrate the superiority of LM-DFL in achieving optimal quantization distortion and show that doubly-adaptive DFL can greatly improve communication efficiency.
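To make the abstract's core idea concrete, the sketch below shows a minimal Lloyd-Max quantizer applied to a toy parameter vector: it alternates between setting decision thresholds at midpoints of the current levels and moving each level to the centroid of its cell, which is the distortion-minimizing iteration the abstract refers to. This is an illustrative assumption of how such a quantizer could look, not the paper's actual implementation; the function and parameter names (lloyd_max_quantize, num_levels) are invented for this example.

```python
import numpy as np

def lloyd_max_quantize(x, num_levels=8, num_iters=50, tol=1e-6):
    """Fit non-uniform quantization levels to the empirical distribution of x
    via the Lloyd-Max iteration, then quantize x to the fitted levels.
    Illustrative sketch only; not the paper's LM-DFL implementation."""
    flat = x.ravel().astype(np.float64)
    # Initialize levels uniformly over the observed value range.
    levels = np.linspace(flat.min(), flat.max(), num_levels)
    for _ in range(num_iters):
        # Decision thresholds are midpoints between adjacent levels.
        thresholds = (levels[:-1] + levels[1:]) / 2.0
        # Assign each sample to the quantization cell it falls into.
        cells = np.digitize(flat, thresholds)
        new_levels = levels.copy()
        for j in range(num_levels):
            members = flat[cells == j]
            if members.size > 0:
                # Centroid update: each level moves to the mean of its cell.
                new_levels[j] = members.mean()
        if np.max(np.abs(new_levels - levels)) < tol:
            levels = new_levels
            break
        levels = new_levels
    # Quantize: map every entry to its nearest fitted level.
    thresholds = (levels[:-1] + levels[1:]) / 2.0
    quantized = levels[np.digitize(x, thresholds)]
    return quantized.reshape(x.shape), levels

# Toy usage: quantize a simulated (non-uniform, Gaussian) parameter update.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grad = rng.normal(0.0, 0.1, size=1000)
    q_grad, levels = lloyd_max_quantize(grad, num_levels=8)
    print("levels:", np.round(levels, 4))
    print("distortion (MSE):", np.mean((grad - q_grad) ** 2))
```

Because the levels adapt to the empirical distribution of the values being sent, the resulting distortion is lower than that of a uniform quantizer with the same number of levels, which is the motivation for applying it to the exchanged model parameters in DFL.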