Federated learning is an emerging learning paradigm in which multiple clients collaboratively train a machine learning model in a privacy-preserving manner. Personalized federated learning extends this paradigm to overcome heterogeneity across clients by learning personalized models. Recently, there have been some initial attempts to apply Transformers to federated learning. However, the impact of federated learning algorithms on self-attention has not yet been studied. This paper investigates this relationship and reveals that federated averaging algorithms actually have a negative impact on self-attention in the presence of data heterogeneity, which limits the capabilities of the Transformer model in federated learning settings. Motivated by this finding, we propose FedTP, a novel Transformer-based federated learning framework that learns personalized self-attention for each client while aggregating the remaining parameters across clients. Instead of using a vanilla personalization mechanism that maintains each client's personalized self-attention layers locally, we develop a learn-to-personalize mechanism to further encourage cooperation among clients and to improve the scalability and generalization of FedTP. Specifically, learn-to-personalize is realized by training a hypernetwork on the server that outputs the personalized projection matrices of the self-attention layers, which generate client-wise queries, keys, and values. Furthermore, we present the generalization bound for FedTP with the learn-to-personalize mechanism. Notably, FedTP offers a convenient environment for performing a range of image and language tasks using the same federated network architecture, all of which benefit from Transformer personalization. Extensive experiments verify that FedTP with the learn-to-personalize mechanism yields state-of-the-art performance in non-IID scenarios. Our code is available online.
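To make the learn-to-personalize mechanism concrete, below is a minimal PyTorch sketch of a server-side hypernetwork that maps a learned per-client embedding to the query/key/value projection matrices of one self-attention layer. All names, dimensions, and the MLP structure here are illustrative assumptions for exposition, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class AttentionHypernetwork(nn.Module):
    """Illustrative server-side hypernetwork: maps a trainable client
    embedding to the projection matrices (W_q, W_k, W_v) of a single
    self-attention layer. Sizes and names are hypothetical."""

    def __init__(self, num_clients: int, embed_dim: int = 32,
                 hidden_dim: int = 128, d_model: int = 64):
        super().__init__()
        self.d_model = d_model
        # One trainable embedding per client, kept and updated on the server.
        self.client_embedding = nn.Embedding(num_clients, embed_dim)
        # Shared MLP that emits the flattened W_q, W_k, W_v in one shot.
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 3 * d_model * d_model),
        )

    def forward(self, client_id: torch.Tensor):
        z = self.client_embedding(client_id)   # (embed_dim,)
        flat = self.mlp(z)                      # (3 * d_model^2,)
        w_q, w_k, w_v = flat.chunk(3, dim=-1)
        shape = (self.d_model, self.d_model)
        return w_q.view(shape), w_k.view(shape), w_v.view(shape)

# Usage sketch: the server generates client-specific attention
# projections, while all other Transformer parameters are aggregated
# with plain federated averaging.
hnet = AttentionHypernetwork(num_clients=100)
w_q, w_k, w_v = hnet(torch.tensor(7))  # projections for client 7
```

Because personalization lives in the hypernetwork's client embeddings rather than in locally stored layers, clients with similar data can share structure through the common MLP, which is what drives the scalability and generalization benefits claimed above.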