Massively multi-task learning with large language models has recently made substantial progress in few-shot generalization. However, it is usually performed in a centralized learning fashion, ignoring the privacy sensitivity of the (annotated) data used across multiple tasks. To mitigate this issue, we propose FewFedWeight, a few-shot federated learning framework across multiple tasks, to achieve the best of both worlds: privacy preservation and cross-task generalization. FewFedWeight trains client models on isolated devices without sharing data. It broadcasts the global model on the server to each client and produces pseudo data for clients so that knowledge from the global model can be leveraged to enhance the few-shot learning of each client model. An energy-based algorithm is further proposed to weight pseudo samples in order to reduce the negative impact of noise in the generated pseudo data. Adaptive model weights of client models are also tuned according to their performance; we use these weights to dynamically aggregate client models when updating the global model. Experiments on 118 NLP tasks show that FewFedWeight significantly improves the performance of client models on 61% of tasks, with an average performance improvement rate of 30.5% over the baseline, and substantially outperforms FedAvg and other decentralized learning methods.
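The abstract's exact weighting scheme is defined in the paper's method section; purely as a rough illustration of the aggregation step it describes, the sketch below shows a generic performance-weighted variant of FedAvg-style aggregation in Python. All names here (aggregate_global_model, client_params, client_perf) are hypothetical and not taken from the paper.

```python
import numpy as np

def aggregate_global_model(client_params, client_perf):
    """Minimal sketch (assumed, not the paper's exact algorithm):
    aggregate per-client parameter dicts into a global model, weighting
    each client by its normalized performance score rather than the
    uniform or sample-size weights used in plain FedAvg."""
    perf = np.asarray(client_perf, dtype=float)
    weights = perf / perf.sum()  # turn performance scores into aggregation weights
    global_params = {}
    for name in client_params[0]:
        # stack the same parameter across clients: shape (num_clients, ...)
        stacked = np.stack([p[name] for p in client_params])
        # weighted sum over the client axis
        global_params[name] = np.tensordot(weights, stacked, axes=1)
    return global_params

# Example: three clients, each with one weight matrix and a performance score.
clients = [{"w": np.random.randn(4, 4)} for _ in range(3)]
scores = [0.62, 0.55, 0.71]
global_model = aggregate_global_model(clients, scores)
```

In this sketch, clients with higher performance contribute more to the updated global model, which mirrors the dynamic, performance-based aggregation the abstract attributes to FewFedWeight.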