This work integrates two learning paradigms, Multi-Task Learning (MTL) and meta learning, to bring together the best of both worlds: the simultaneous learning of multiple tasks, an element of MTL, and prompt adaptation to new tasks with little data, a strength of meta learning. We propose Multi-task Meta Learning (MTML), an approach that employs meta learning to enhance MTL over single-task learning. The fundamental idea of this work is to train a multi-task model such that, when an unseen task is introduced, it can learn in fewer steps while performing at least as well as conventional single-task learning on the new task or as its inclusion within the MTL model. Through various experiments, we demonstrate this paradigm on two datasets, NYU-v2 and Taskonomy, and four tasks: semantic segmentation, depth estimation, surface normal estimation, and edge detection. MTML achieves state-of-the-art results on most of these tasks, and MTL also performs reasonably well on all tasks compared to single-task learning.
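To make the fundamental idea concrete, the sketch below illustrates one plausible realization of meta-training a multi-task model so that its shared parameters adapt to a task in a few gradient steps. It uses a first-order, Reptile-style meta-update for brevity and is not the paper's actual MTML algorithm; the names MultiTaskNet, meta_train_step, inner_lr, inner_steps, and meta_step are hypothetical and introduced only for this illustration.

```python
# Illustrative sketch only: a Reptile-style first-order meta-update over
# several tasks sharing an encoder. All names here are hypothetical, not
# taken from the paper.
import copy
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Toy stand-in: a shared encoder with one linear head per task."""
    def __init__(self, in_dim=64, hidden=128, out_dims=(1, 1, 3, 1)):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in out_dims)

    def forward(self, x, task_id):
        return self.heads[task_id](self.encoder(x))

def meta_train_step(model, task_batches, inner_lr=1e-2, inner_steps=3,
                    meta_step=0.1):
    """Adapt a clone of the model to each task for a few SGD steps, then
    move the shared weights toward the mean of the adapted weights."""
    loss_fn = nn.MSELoss()
    base = {k: v.clone() for k, v in model.state_dict().items()}
    deltas = {k: torch.zeros_like(v) for k, v in base.items()}
    for task_id, (x, y) in enumerate(task_batches):
        adapted = copy.deepcopy(model)            # inner-loop copy
        opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):              # task-specific adaptation
            opt.zero_grad()
            loss_fn(adapted(x, task_id), y).backward()
            opt.step()
        for k, v in adapted.state_dict().items():
            deltas[k] += v - base[k]              # accumulate parameter shift
    # Outer update: interpolate toward the average adapted parameters.
    n = len(task_batches)
    model.load_state_dict({k: base[k] + meta_step * deltas[k] / n
                           for k in base})

# Usage with random stand-in data for four tasks (one batch per task):
model = MultiTaskNet()
batches = [(torch.randn(8, 64), torch.randn(8, d)) for d in (1, 1, 3, 1)]
meta_train_step(model, batches)
```

Under this reading, adaptation to an unseen task amounts to running only the inner loop (a few SGD steps on the new task's head and shared encoder), which is where the claimed few-step learning comes from.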