Federated Learning (FL) aims to learn a single global model, with a central server coordinating model training across local clients without accessing their local data. The key challenge of FL is the heterogeneity of local data across clients, such as heterogeneous label distributions and feature shift, which can significantly degrade the performance of the learned models. Although many methods have been proposed to address the heterogeneous label distribution problem, few attempt to tackle the feature shift issue. To address this issue, we propose a simple yet effective algorithm, namely \textbf{p}ersonalized \textbf{Fed}erated learning with \textbf{L}ocal \textbf{A}ttention (pFedLA), which incorporates the attention mechanism into the personalized models of clients while keeping the attention blocks client-specific. Specifically, pFedLA comprises two modules, i.e., the personalized single attention module and the personalized hybrid attention module. Moreover, pFedLA is flexible and general: it can be incorporated into any FL method to improve performance without introducing additional communication costs. Extensive experiments demonstrate that pFedLA boosts the performance of state-of-the-art FL methods on different tasks such as image classification and object detection.
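To make the client-specific attention idea concrete, below is a minimal sketch (not the authors' implementation) of how attention blocks can be kept local while the remaining parameters are aggregated FedAvg-style, so the attention parameters are never communicated and no extra communication cost is incurred. The model layout, the module name `attn`, and the plain averaging rule are all assumptions for illustration.

\begin{verbatim}
# Sketch: FedAvg over shared parameters only; each client's attention
# block stays local and is never sent to the server (hypothetical layout).
import copy
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    """Backbone + a client-specific attention block (assumed architecture)."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)  # shared, aggregated
        self.attn = nn.MultiheadAttention(dim, 2, batch_first=True)  # local
        self.head = nn.Linear(dim, 10)       # shared, aggregated

    def forward(self, x):
        h = self.backbone(x)
        h, _ = self.attn(h, h, h)  # personalized attention on features
        return self.head(h.mean(dim=1))

def is_personal(name: str) -> bool:
    # Attention parameters are client-specific: excluded from averaging.
    return name.startswith("attn")

def aggregate(models):
    """Average shared parameters across clients; leave attention untouched,
    so nothing beyond plain FedAvg is communicated."""
    global_state = copy.deepcopy(models[0].state_dict())
    for name in global_state:
        if is_personal(name):
            continue
        global_state[name] = torch.stack(
            [m.state_dict()[name].float() for m in models]
        ).mean(dim=0)
    for m in models:
        own = m.state_dict()
        for name in global_state:
            if not is_personal(name):
                own[name] = global_state[name].clone()
        m.load_state_dict(own)

clients = [ClientModel() for _ in range(3)]
aggregate(clients)  # backbones/heads now identical; attn stays per-client
\end{verbatim}

The same masking-by-parameter-name pattern would apply on top of other FL aggregation rules, which is consistent with the claim that the approach can be plugged into existing FL methods.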