In Federated Learning (FL), multiple clients collaborate to learn a shared model through a central server while keeping data decentralized. Personalized Federated Learning (PFL) further extends FL by learning a personalized model per client. In both FL and PFL, all clients participate in the training process and their labeled data are used for training. However, in reality, novel clients may wish to join a prediction service after it has been deployed, obtaining predictions for their own \textbf{unlabeled} data. Here, we introduce a new learning setup, On-Demand Unlabeled PFL (OD-PFL), where a system trained on a set of clients needs to later be applied to novel unlabeled clients at inference time. We propose a novel approach to this problem, ODPFL-HN, which learns to produce a new model for the late-to-the-party client. Specifically, we train an encoder network that learns a representation for a client given its unlabeled data. That client representation is fed to a hypernetwork that generates a personalized model for that client. Evaluated on five benchmark datasets, we find that ODPFL-HN generalizes better than current FL and PFL methods, especially when the novel client has a large distribution shift from the training clients. We also analyze the generalization error for novel clients and show analytically and experimentally how novel clients can apply differential privacy.
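To make the setup concrete, the sketch below shows one possible encoder-plus-hypernetwork pairing in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the layer sizes, the mean-pooling aggregation over a client's unlabeled examples, and the choice of a linear personalized model are all simplifications made for brevity.

\begin{verbatim}
# Minimal sketch of the ODPFL-HN idea (illustrative, not the paper's code):
# an encoder maps a client's unlabeled data to a client embedding, and a
# hypernetwork maps that embedding to the weights of a small personalized
# classifier. All dimensions and architectures here are assumptions.
import torch
import torch.nn as nn

class ClientEncoder(nn.Module):
    """Pools per-example features into a single client representation."""
    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, embed_dim))

    def forward(self, unlabeled_x: torch.Tensor) -> torch.Tensor:
        # unlabeled_x: (num_examples, in_dim); mean-pool over examples
        return self.net(unlabeled_x).mean(dim=0)

class HyperNetwork(nn.Module):
    """Generates the weights of a personalized linear model."""
    def __init__(self, embed_dim: int, in_dim: int, num_classes: int):
        super().__init__()
        self.in_dim, self.num_classes = in_dim, num_classes
        out = num_classes * in_dim + num_classes  # weight + bias entries
        self.net = nn.Sequential(nn.Linear(embed_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out))

    def forward(self, client_embedding: torch.Tensor):
        flat = self.net(client_embedding)
        w = flat[: self.num_classes * self.in_dim]
        w = w.view(self.num_classes, self.in_dim)
        b = flat[self.num_classes * self.in_dim :]
        return w, b

# At inference time, a novel client contributes only unlabeled data and
# receives a personalized model generated on demand:
encoder = ClientEncoder(in_dim=32, embed_dim=16)
hnet = HyperNetwork(embed_dim=16, in_dim=32, num_classes=10)
x_unlabeled = torch.randn(64, 32)       # novel client's unlabeled data
w, b = hnet(encoder(x_unlabeled))       # weights of the personalized model
logits = x_unlabeled @ w.T + b          # predictions from that model
\end{verbatim}

In this sketch, only the client embedding (or the unlabeled data used to compute it) crosses the client-server boundary, which is where the differential-privacy analysis mentioned above would apply; during training, the encoder and hypernetwork would be optimized end to end over the participating labeled clients.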