Traditional multi-task learning architectures train a single model across multiple tasks through a shared encoder followed by task-specific decoders. Learning these models often requires specialized training algorithms that address task conflict in the shared parameter updates, which otherwise can lead to negative transfer. A newer style of multi-task learning within NLP homogenizes the multi-task architecture into a shared encoder with a language-model decoder, and it performs surprisingly well across a range of diverse tasks. Does this new architecture suffer from task conflicts that require specialized training algorithms? We study how several factors in the shift towards text-to-text models affect multi-task conflict and negative transfer, finding that both directional conflict and transfer remain surprisingly constant across architectures.
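As a minimal sketch of what "directional conflict" refers to (not the paper's code): one common diagnostic is the cosine similarity between per-task gradients on the shared parameters, where a negative value signals that the tasks pull the shared encoder in opposing directions. The model, heads, and toy batches below are illustrative placeholders, and `task_gradient` is a hypothetical helper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical shared encoder with two task-specific heads (placeholders).
shared = nn.Linear(16, 32)
heads = {"task_a": nn.Linear(32, 4), "task_b": nn.Linear(32, 4)}
loss_fn = nn.CrossEntropyLoss()

def task_gradient(task: str, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Flattened gradient of the task loss w.r.t. the shared encoder only."""
    loss = loss_fn(heads[task](shared(x)), y)
    grads = torch.autograd.grad(loss, list(shared.parameters()))
    return torch.cat([g.flatten() for g in grads])

# Toy batches standing in for real task data.
x_a, y_a = torch.randn(8, 16), torch.randint(0, 4, (8,))
x_b, y_b = torch.randn(8, 16), torch.randint(0, 4, (8,))

g_a = task_gradient("task_a", x_a, y_a)
g_b = task_gradient("task_b", x_b, y_b)

# Negative cosine similarity indicates directional conflict between the tasks'
# updates to the shared parameters.
cos = torch.nn.functional.cosine_similarity(g_a, g_b, dim=0)
print(f"gradient cosine similarity: {cos.item():.3f}")
```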