Multi-Task Learning (MTL) networks have emerged as a promising method for transferring learned knowledge across different tasks. However, MTL must deal with challenges such as overfitting to low-resource tasks, catastrophic forgetting, and negative task transfer (or learning interference). Often, in Natural Language Processing (NLP), a separate model per task is needed to obtain the best performance. However, many fine-tuning approaches are both parameter-inefficient, i.e., potentially involving one new model per task, and highly susceptible to losing knowledge acquired during pretraining. We propose a novel Transformer architecture consisting of a new conditional attention mechanism as well as a set of task-conditioned modules that facilitate weight sharing. Through this construction (a hypernetwork adapter), we achieve more efficient parameter sharing and mitigate forgetting by keeping half of the weights of a pretrained model fixed. We also use a new multi-task data sampling strategy to mitigate the negative effects of data imbalance across tasks. Using this approach, we are able to surpass single-task fine-tuning methods while being parameter- and data-efficient (using around 66% of the data for weight updates). Compared to other BERT Large methods on GLUE, our 8-task model surpasses other Adapter methods by 2.8%, and our 24-task model outperforms models that use MTL and single-task fine-tuning by 0.7-1.0%. We show that a larger variant of our single multi-task model approach performs competitively across 26 NLP tasks and yields state-of-the-art results on a number of test and development sets. Our code is publicly available at https://github.com/CAMTL/CA-MTL.
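To make the idea of task-conditioned attention concrete, the PyTorch sketch below shows one simplified way a task embedding can modulate a self-attention layer: a small hypernetwork maps the task embedding to a shift applied to the query vectors, so the attention weights become task dependent. This is a minimal illustration under stated assumptions, not the paper's exact conditional attention mechanism; the class name, dimensions, and the query-shift conditioning are assumptions made for clarity.

```python
import math
import torch
import torch.nn as nn


class TaskConditionedAttention(nn.Module):
    """Single-head self-attention whose logits are conditioned on a task embedding.

    Illustrative sketch only: the real CA-MTL conditional attention is richer,
    but the core idea of conditioning attention on a learned task representation
    is the same.
    """

    def __init__(self, hidden_dim: int, num_tasks: int, task_emb_dim: int = 64):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.value = nn.Linear(hidden_dim, hidden_dim)
        # Hypernetwork-style adapter: the task embedding is mapped to a vector
        # that shifts every query, making the attention weights task dependent.
        self.task_embedding = nn.Embedding(num_tasks, task_emb_dim)
        self.task_to_query_shift = nn.Linear(task_emb_dim, hidden_dim)
        self.scale = math.sqrt(hidden_dim)

    def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim); task_id: (batch,)
        q, k, v = self.query(x), self.key(x), self.value(x)
        shift = self.task_to_query_shift(self.task_embedding(task_id))  # (batch, hidden_dim)
        q = q + shift.unsqueeze(1)                                      # broadcast over positions
        logits = torch.matmul(q, k.transpose(-1, -2)) / self.scale      # (batch, seq, seq)
        attn = torch.softmax(logits, dim=-1)
        return torch.matmul(attn, v)


if __name__ == "__main__":
    layer = TaskConditionedAttention(hidden_dim=768, num_tasks=8)
    tokens = torch.randn(2, 16, 768)
    task_id = torch.tensor([0, 3])
    print(layer(tokens, task_id).shape)  # torch.Size([2, 16, 768])
```

Because only the small task embedding and hypernetwork are task specific, such a module can be shared across many tasks while the bulk of the pretrained weights stays frozen, which is the kind of parameter sharing and forgetting mitigation the abstract refers to.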