Personalized federated learning is tasked with training machine learning models for multiple clients, each with its own data distribution. The goal is to train personalized models in a collaborative way while accounting for data disparities across clients and reducing communication costs. We propose a novel approach to this problem using hypernetworks, termed pFedHN for personalized Federated HyperNetworks. In this approach, a central hypernetwork model is trained to generate a set of models, one model for each client. This architecture provides effective parameter sharing across clients, while maintaining the capacity to generate unique and diverse personal models. Furthermore, since hypernetwork parameters are never transmitted, this approach decouples the communication cost from the trainable model size. We test pFedHN empirically on several personalized federated learning challenges and find that it outperforms previous methods. Finally, since hypernetworks share information across clients, we show that pFedHN can generalize better to new clients whose distributions differ from any client observed during training.
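To make the architecture concrete, below is a minimal PyTorch-style sketch of the core idea: a central hypernetwork maps a learned per-client embedding to the parameters of that client's personal model. All names here (ClientHyperNetwork, embed_dim, the single-linear-layer target model) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClientHyperNetwork(nn.Module):
    """Maps a learned per-client embedding to the weights of a small target net.

    Hypothetical sketch: the real pFedHN architecture and sizes may differ.
    """
    def __init__(self, n_clients, embed_dim=16, hidden=100, in_dim=784, n_classes=10):
        super().__init__()
        self.in_dim, self.n_classes = in_dim, n_classes
        # One trainable embedding vector per client; shared trunk provides
        # the cross-client parameter sharing described in the abstract.
        self.embeddings = nn.Embedding(n_clients, embed_dim)
        self.trunk = nn.Sequential(nn.Linear(embed_dim, hidden), nn.ReLU())
        # Heads emit each parameter tensor of the client's personal model
        # (here, a single linear classifier for simplicity).
        self.weight_head = nn.Linear(hidden, n_classes * in_dim)
        self.bias_head = nn.Linear(hidden, n_classes)

    def forward(self, client_id):
        h = self.trunk(self.embeddings(client_id))
        W = self.weight_head(h).view(self.n_classes, self.in_dim)
        b = self.bias_head(h)
        return W, b

# Generate the personal model for client 3. The hypernetwork itself stays on
# the server; only the generated parameters (or their update) would ever be
# communicated, which is why communication cost is decoupled from the
# hypernetwork's size.
hn = ClientHyperNetwork(n_clients=50)
W, b = hn(torch.tensor(3))
logits = F.linear(torch.randn(8, 784), W, b)  # forward pass with generated params
```

Because the generated parameters are a differentiable function of the hypernetwork, a loss computed on a client's data backpropagates through W and b into the shared trunk and that client's embedding, so one central model learns to produce diverse personal models.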