To overcome the burden of frequent model uploads and downloads during federated learning (FL), we propose a communication-efficient re-parameterization, FedPara. Our method re-parameterizes each layer's weights using low-rank matrices or tensors combined via the Hadamard product. Unlike conventional low-rank parameterization, our method is not restricted by a low-rank constraint; thus, FedPara has a larger capacity than a low-rank parameterization with the same number of parameters. It achieves performance comparable to the original models while requiring 2.8 to 10.1 times lower communication costs, which is not achievable with traditional low-rank parameterization. Moreover, the efficiency can be improved further by combining our method with other efficient FL techniques, since it is compatible with them. We also extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones. We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters.
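As a rough illustration of the idea (not the authors' released code), the sketch below re-parameterizes a single linear layer's weight as the Hadamard product of two low-rank factors, following the description in the abstract. The class name `HadamardLowRankLinear`, the initialization scheme, and the bias handling are assumptions for the sake of a runnable example.

```python
import torch
import torch.nn as nn


class HadamardLowRankLinear(nn.Module):
    """Minimal sketch of a FedPara-style re-parameterized linear layer.

    The full weight W (out_features x in_features) is never stored directly;
    it is composed on the fly as the Hadamard (element-wise) product of two
    low-rank matrices, W = (X1 @ Y1^T) * (X2 @ Y2^T). Only the factors
    X1, Y1, X2, Y2 need to be communicated between server and clients.
    """

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        # Two pairs of low-rank factors; names and init are illustrative.
        self.x1 = nn.Parameter(torch.empty(out_features, rank))
        self.y1 = nn.Parameter(torch.empty(in_features, rank))
        self.x2 = nn.Parameter(torch.empty(out_features, rank))
        self.y2 = nn.Parameter(torch.empty(in_features, rank))
        self.bias = nn.Parameter(torch.zeros(out_features))
        for p in (self.x1, self.y1, self.x2, self.y2):
            nn.init.kaiming_uniform_(p, a=5 ** 0.5)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # Reconstruct the full weight from the factors, then apply it.
        weight = (self.x1 @ self.y1.t()) * (self.x2 @ self.y2.t())
        return inputs @ weight.t() + self.bias


# Usage sketch: a layer with 512 inputs, 256 outputs, and inner rank 16
# stores 2 * 16 * (512 + 256) + 256 parameters instead of 512 * 256 + 256.
layer = HadamardLowRankLinear(in_features=512, out_features=256, rank=16)
out = layer(torch.randn(8, 512))
```

Intuitively, each factor pair has rank at most r, but their Hadamard product can reach rank up to r², which is why this construction is not bound by the usual low-rank constraint at the same communication cost.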