In this work, we propose a communication-efficient parameterization, FedPara, for federated learning (FL) to overcome the burdens of frequent model uploads and downloads. Our method re-parameterizes the weight parameters of layers using low-rank weights followed by the Hadamard product. Compared to conventional low-rank parameterization, FedPara is not restricted to low-rank constraints and thereby has a far larger capacity. This property enables it to achieve comparable performance while requiring 3 to 10 times lower communication costs than the model with the original layers, which is not achievable by traditional low-rank methods. The efficiency of our method can be further improved by combining it with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones. We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters.
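To make the core idea concrete, below is a minimal PyTorch-style sketch, not the authors' implementation, of a linear layer whose weight is built from two low-rank factor pairs combined via the Hadamard product; the class name HadamardLowRankLinear and the single shared rank hyperparameter are illustrative assumptions.

```python
# Minimal sketch (assumption, not the authors' code) of a low-rank + Hadamard
# product re-parameterization: the weight is the element-wise product of two
# low-rank matrices, so the achievable rank can exceed that of either factor
# while only the small factor matrices need to be communicated.
import torch
import torch.nn as nn


class HadamardLowRankLinear(nn.Module):
    """Linear layer whose weight is (X1 @ Y1.T) * (X2 @ Y2.T)."""

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        # Two pairs of low-rank factors; `rank` is a hypothetical hyperparameter.
        self.x1 = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.y1 = nn.Parameter(torch.randn(in_features, rank) * 0.02)
        self.x2 = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.y2 = nn.Parameter(torch.randn(in_features, rank) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def weight(self) -> torch.Tensor:
        # Hadamard (element-wise) product of two rank-`rank` matrices;
        # the result can have rank up to rank**2.
        return (self.x1 @ self.y1.T) * (self.x2 @ self.y2.T)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight().T + self.bias


# Only the factors (x1, y1, x2, y2) and the bias would be exchanged with the
# server, which is what reduces upload/download cost in this sketch.
layer = HadamardLowRankLinear(in_features=512, out_features=256, rank=8)
out = layer(torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 256])
```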