Multi-task learning (MTL) aims to enhance the performance and efficiency of machine learning models by training them on multiple tasks simultaneously. However, MTL research faces two challenges: 1) modeling the relationships between tasks to effectively share knowledge between them, and 2) jointly learning task-specific and shared knowledge. In this paper, we present a novel model, the Adaptive Task-to-Task Fusion Network (AdaTT), to address both challenges. AdaTT is a deep fusion network built with task-specific and optional shared fusion units at multiple levels. By leveraging a residual mechanism and a gating mechanism for task-to-task fusion, these units adaptively learn both shared and task-specific knowledge. To evaluate the performance of AdaTT, we conduct experiments on a public benchmark and an industrial recommendation dataset using various task groups. Results demonstrate that AdaTT significantly outperforms existing state-of-the-art baselines.
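To make the fusion idea concrete, here is a minimal NumPy sketch of one task's fusion unit at a single level: the task's own (native) expert output is kept via a residual connection, while a learned gate adaptively weights and sums the outputs of all experts (task-specific and shared). All names and shapes here are illustrative assumptions, not the paper's exact formulation, which operates over multiple levels with learned linear experts and gates.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_fusion(expert_outputs, gate_logits, task_idx):
    """One task's fusion unit (simplified, single level).

    expert_outputs: (num_experts, d) array, outputs of all experts
                    at this level (task-specific and shared)
    gate_logits:    (num_experts,) learned gate scores for this task
    task_idx:       index of this task's own (native) expert

    Returns the fused representation: the native expert's output
    (residual mechanism) plus a gate-weighted combination of all
    experts' outputs (gating mechanism for task-to-task fusion).
    """
    weights = softmax(gate_logits)           # gating mechanism
    fused = weights @ expert_outputs         # adaptive task-to-task fusion
    return expert_outputs[task_idx] + fused  # residual connection

# Toy example: 3 experts (2 task-specific + 1 shared), hidden dim 4.
rng = np.random.default_rng(0)
experts = rng.normal(size=(3, 4))
gates = rng.normal(size=(3,))
out = adaptive_fusion(experts, gates, task_idx=0)
print(out.shape)  # shape of the fused representation: (4,)
```

In the full model, a stack of such units, one per task at each fusion level (plus optional shared units), lets each task decide how much to borrow from the others at every depth.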