Federated learning (FL) has emerged as a new paradigm for privacy-preserving computation in recent years. Unfortunately, FL faces two critical challenges that hinder its practical performance: data distribution heterogeneity and the high resource costs brought by large foundation models. Specifically, non-IID data across clients make it hard for existing FL algorithms to converge, while high resource costs, including computational and communication costs, increase deployment difficulty in real-world scenarios. In this paper, we propose an effective yet simple method, named FedCLIP, to achieve fast generalization and personalization for CLIP in federated learning. Concretely, we design an attention-based adapter for the large model, CLIP, and all remaining operations depend only on the adapters. Lightweight adapters make the most of the pretrained model's information and keep the model adaptive to each client's specific task. At the same time, these small-scale operations mitigate the computational and communication burden caused by large models. Extensive experiments are conducted on three datasets with distribution shifts. Qualitative and quantitative results demonstrate that FedCLIP significantly outperforms other baselines (9% overall improvement on PACS) and effectively reduces computational and communication costs (283x faster than FedAVG). Our code will be available at: https://github.com/microsoft/PersonalizedFL.
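To make the adapter idea concrete, below is a minimal sketch, assuming a PyTorch-style attention adapter placed on top of a frozen CLIP encoder; the module layout, feature dimension, and all names are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch: a small attention-based adapter that re-weights frozen
# CLIP features. Only the adapter's parameters would be trained on each client
# and exchanged with the server; the CLIP backbone itself stays frozen.
import torch
import torch.nn as nn

class AttentionAdapter(nn.Module):
    """Trainable attention head applied to frozen CLIP image/text features."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(dim, dim), nn.Tanh(),
            nn.Linear(dim, dim), nn.Softmax(dim=-1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Element-wise re-weighting of the feature vector.
        return feats * self.attn(feats)

adapter = AttentionAdapter(dim=512)
# In a federated round, a client would upload only this small state dict,
# which the server could then average FedAvg-style across clients.
client_update = {k: v.detach().clone() for k, v in adapter.state_dict().items()}
```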