As a popular paradigm of distributed learning, personalized federated learning (PFL) allows personalized models to improve generalization ability and robustness by utilizing knowledge from all distributed clients. Most existing PFL algorithms tackle personalization in a model-centric way, such as personalized layer partitioning, model regularization, and model interpolation, all of which fail to take into account the data characteristics of distributed clients. In this paper, we propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent the local data distribution information of clients and provides that information to the aggregated model to help with classification tasks. Specifically, in each round of pFedPT training, each client generates a local personalized prompt related to its local data distribution. Then, the local model is trained on the input composed of the raw data and the visual prompt to learn the distribution information contained in the prompt. During model testing, the aggregated model obtains prior knowledge of the data distributions based on the prompts, which can be seen as an adaptive fine-tuning of the aggregated model to improve model performance on different clients. Furthermore, the visual prompt can be added as an orthogonal method to implement personalization on the client for existing FL methods to boost their performance. Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
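To make the input-composition step concrete, the following is a minimal sketch (not the paper's implementation) of how a learnable per-client visual prompt might be composed with a raw image, assuming the common padding-style design in which the prompt forms a border frame and the image occupies the center; the function name, `pad` parameter, and tensor layout are all illustrative assumptions.

```python
import numpy as np

def apply_visual_prompt(image: np.ndarray, prompt: np.ndarray, pad: int = 4) -> np.ndarray:
    """Compose a raw image with a client-specific visual prompt (sketch).

    image:  (C, H, W) raw input.
    prompt: (C, H + 2*pad, W + 2*pad) learnable per-client parameter,
            assumed here to act as a border frame around the image.
    Returns the composed input fed to the local model.
    """
    composed = prompt.copy()
    # Place the raw image in the center; the prompt pixels remain in the border.
    composed[:, pad:prompt.shape[1] - pad, pad:prompt.shape[2] - pad] = image
    return composed

# Usage: a 32x32 CIFAR image with a 4-pixel prompt frame yields a 40x40 input.
img = np.ones((3, 32, 32))
prompt = np.zeros((3, 40, 40))
composed = apply_visual_prompt(img, prompt, pad=4)
```

During local training, only the prompt (and the local model) would be updated on client data, so the prompt comes to encode the client's data distribution; other composition schemes (e.g. an additive full-size prompt) would fit the same interface.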