Multi-Task Learning (MTL) networks have emerged as a promising method for transferring learned knowledge across different tasks. However, MTL must deal with challenges such as overfitting to low-resource tasks, catastrophic forgetting, and negative task transfer (or learning interference). Often, in Natural Language Processing (NLP), a separate model per task is needed to obtain the best performance. However, many fine-tuning approaches are both parameter inefficient, i.e., potentially requiring one new model per task, and highly susceptible to losing knowledge acquired during pretraining. We propose a novel Transformer architecture consisting of a new conditional attention mechanism as well as a set of task-conditioned modules that facilitate weight sharing. Through this construction, we achieve more efficient parameter sharing and mitigate forgetting by keeping half of the weights of a pretrained model fixed. We also use a new multi-task data sampling strategy to mitigate the negative effects of data imbalance across tasks. Using this approach, we are able to surpass single-task fine-tuning methods while being parameter and data efficient (using around 66% of the data for weight updates). Compared to other BERT Large methods on GLUE, our 8-task model surpasses other Adapter methods by 2.8%, and our 24-task model outperforms models that use MTL and single-task fine-tuning by 0.7-1.0%. We show that a larger variant of our single multi-task model approach performs competitively across 26 NLP tasks and yields state-of-the-art results on a number of test and development sets. Our code is publicly available at https://github.com/CAMTL/CA-MTL.
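To make the idea of task-conditioned modules concrete, the sketch below shows one way an attention block can be modulated by a learned task embedding while the underlying pretrained weights stay frozen. This is a minimal illustrative sketch only, not the paper's implementation: the class and parameter names (TaskConditionedAttention, task_dim, etc.) are hypothetical.

```python
# Illustrative sketch: self-attention whose queries are modulated by a task embedding.
# Not the CA-MTL code; names and the exact conditioning scheme are assumptions.
import torch
import torch.nn as nn


class TaskConditionedAttention(nn.Module):
    """Self-attention block conditioned on a per-task embedding."""

    def __init__(self, hidden_dim: int, num_heads: int, num_tasks: int, task_dim: int = 64):
        super().__init__()
        self.task_embedding = nn.Embedding(num_tasks, task_dim)
        # The task embedding generates a scale and bias applied to the queries,
        # so only these small task-specific modules are trained.
        self.task_to_scale = nn.Linear(task_dim, hidden_dim)
        self.task_to_bias = nn.Linear(task_dim, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, hidden_states: torch.Tensor, task_ids: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim); task_ids: (batch,)
        task_vec = self.task_embedding(task_ids)           # (batch, task_dim)
        scale = self.task_to_scale(task_vec).unsqueeze(1)   # (batch, 1, hidden_dim)
        bias = self.task_to_bias(task_vec).unsqueeze(1)     # (batch, 1, hidden_dim)
        queries = hidden_states * (1.0 + scale) + bias      # task-conditioned queries
        out, _ = self.attn(queries, hidden_states, hidden_states)
        return out


if __name__ == "__main__":
    block = TaskConditionedAttention(hidden_dim=768, num_heads=12, num_tasks=8)
    x = torch.randn(2, 16, 768)         # two examples, 16 tokens each
    task_ids = torch.tensor([0, 3])     # each example tagged with its task id
    print(block(x, task_ids).shape)     # torch.Size([2, 16, 768])
```

In this sketch, freezing the shared attention weights and training only the task-conditioning layers mirrors the abstract's strategy of keeping half of the pretrained weights fixed while sharing parameters across tasks.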