In recent years, Multi-Task Learning (MTL) has attracted much attention due to its good performance in many applications. However, many existing MTL models cannot guarantee that their performance on each task is no worse than that of their single-task counterparts. Although some works have empirically observed this phenomenon, few aim to address the resulting problem. In this paper, we formally define this phenomenon as negative sharing and define safe multi-task learning as learning in which no negative sharing occurs. To achieve safe multi-task learning, we propose a Deep Safe Multi-Task Learning (DSMTL) model with two learning strategies: individual learning and joint learning. We theoretically study the safeness of both learning strategies in the DSMTL model and show that the proposed methods achieve some versions of safe multi-task learning. Moreover, to improve the scalability of the DSMTL model, we propose an extension that automatically learns a compact architecture and empirically achieves safe multi-task learning. Extensive experiments on benchmark datasets verify the safeness of the proposed methods.
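The intuition behind safeness can be sketched as follows: if each task's representation is a gated combination of a shared branch and a task-private branch, then setting the gate to zero recovers the single-task path, so sharing can never be forced on a task that it would hurt. The sketch below is illustrative only and is not the authors' implementation; the function name and the per-task gate `alpha` are hypothetical.

```python
# Illustrative sketch (not the authors' code): a DSMTL-style model combines a
# shared encoder's features with a per-task private encoder's features via a
# learnable gate alpha in [0, 1]. alpha = 0 ignores the shared branch entirely,
# recovering the single-task path -- the intuition behind avoiding negative
# sharing.

def gated_representation(shared_feat, private_feat, alpha):
    """Convex combination of shared and private features for one task.

    alpha is the (hypothetical) per-task gate: alpha = 0 yields the purely
    private (single-task) representation; alpha = 1 yields the purely shared one.
    """
    return [alpha * s + (1.0 - alpha) * p
            for s, p in zip(shared_feat, private_feat)]

# With alpha = 0 the task uses only its private features.
print(gated_representation([1.0, 2.0], [3.0, 4.0], 0.0))  # [3.0, 4.0]
```

In practice such a gate would be learned jointly with the encoders; the point of the sketch is only that the single-task solution remains reachable within the model's hypothesis space.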