Multi-task learning (MTL) aims to improve model performance by transferring and exploiting common knowledge among tasks. Existing MTL works mainly focus on the scenario where the label sets of the multiple tasks (MTs) are the same, so that they can be directly utilized for learning across the tasks. However, few works explore the scenario where each task has only a small number of training samples and the label sets are only partially overlapped, or not overlapped at all. Learning such MTs is more challenging because less correlation information is available among these tasks. To address this, we propose a framework that learns these tasks jointly by leveraging both the abundant information from a learnt auxiliary big task, whose sufficiently many classes cover those of all these tasks, and the information shared among the partially-overlapped tasks. In our implementation, which uses the same neural network architecture as the learnt auxiliary task to learn the individual tasks, the key idea is to utilize the available label information to adaptively prune the hidden-layer neurons of the auxiliary network, constructing a corresponding network for each task while jointly learning across the individual tasks, as sketched below. Our experimental results demonstrate the effectiveness of our framework in comparison with state-of-the-art approaches.
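To make the pruning idea concrete, here is a minimal PyTorch sketch of one plausible instantiation. All names (`AuxiliaryNet`, `neuron_relevance`, `MaskedTaskNet`), the relevance criterion (magnitude of the auxiliary head weights tied to a task's labels), and the keep ratio are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn

class AuxiliaryNet(nn.Module):
    """Hypothetical auxiliary network, pre-trained on a big task whose
    classes cover the label sets of all the small tasks."""
    def __init__(self, in_dim=128, hidden_dim=256, num_big_classes=100):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden_dim)
        self.act = nn.ReLU()
        self.head = nn.Linear(hidden_dim, num_big_classes)

    def forward(self, x):
        return self.head(self.act(self.hidden(x)))

def neuron_relevance(aux_net, task_labels):
    """Score each hidden neuron by the magnitude of its outgoing weights
    to the task's own classes (an assumed label-driven criterion)."""
    with torch.no_grad():
        return aux_net.head.weight[task_labels].abs().sum(dim=0)

class MaskedTaskNet(nn.Module):
    """Per-task view of the shared auxiliary network: a binary mask
    prunes the least relevant hidden neurons, and the big head is
    restricted to the task's own labels. The auxiliary modules are
    shared objects, so training the tasks jointly lets them exchange
    information through the common weights."""
    def __init__(self, aux_net, task_labels, keep_ratio=0.5):
        super().__init__()
        self.aux = aux_net                       # shared, not copied
        self.register_buffer("labels", task_labels)
        scores = neuron_relevance(aux_net, task_labels)
        k = max(1, int(keep_ratio * scores.numel()))
        mask = torch.zeros_like(scores)
        mask[scores.topk(k).indices] = 1.0       # keep top-k relevant neurons
        self.register_buffer("mask", mask)

    def forward(self, x):
        h = self.aux.act(self.aux.hidden(x)) * self.mask  # adaptive pruning
        return self.aux.head(h)[:, self.labels]           # task-local logits

# Joint learning across the partially-overlapped tasks: summing the
# per-task losses updates the shared auxiliary weights, so overlapping
# labels let information flow between tasks.
aux = AuxiliaryNet()
tasks = [torch.tensor([0, 1, 2]), torch.tensor([2, 3])]  # overlap on class 2
nets = [MaskedTaskNet(aux, t) for t in tasks]
opt = torch.optim.Adam(aux.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    opt.zero_grad()
    # Dummy batches stand in for each task's few training samples.
    loss = sum(loss_fn(net(torch.randn(8, 128)), torch.randint(len(t), (8,)))
               for t, net in zip(tasks, nets))
    loss.backward()
    opt.step()
```

Because each per-task network is a masked view of the same auxiliary backbone rather than a separate copy, gradients from every task update the shared weights, which is one simple way to realize joint learning across tasks whose label sets only partially overlap.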