Despite recent advances in multi-task learning for dense prediction problems, most methods rely on expensive labelled datasets. In this paper, we present a label-efficient approach to jointly learning multiple dense prediction tasks on partially annotated data (i.e. not all task labels are available for each image), which we call multi-task partially-supervised learning. We propose a multi-task training procedure that successfully leverages task relations to supervise its multi-task learning when data is only partially annotated. In particular, we learn to map each task pair to a joint pairwise task-space, which enables sharing information between the two tasks in a computationally efficient way through another network conditioned on task pairs, and avoids learning trivial cross-task relations by retaining high-level information about the input image. We rigorously demonstrate that the proposed method effectively exploits images with unlabelled tasks and outperforms existing semi-supervised learning approaches and related methods on three standard benchmarks.
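To make the core idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual architecture) of mapping two task features into a shared pairwise space via a single network conditioned on a task-pair embedding; all dimensions, the concatenation-based conditioning, and the consistency loss are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper).
FEAT_DIM, EMB_DIM, JOINT_DIM, N_TASKS = 16, 4, 8, 3

# One embedding per task; a task pair is represented by concatenating
# the two task embeddings (an assumed conditioning scheme).
task_emb = rng.normal(size=(N_TASKS, EMB_DIM))

# A single shared linear mapping acts on [feature, pair embedding], so
# the same parameters serve every task pair (conditioning by input).
W = rng.normal(size=(FEAT_DIM + 2 * EMB_DIM, JOINT_DIM))

def to_joint_space(feat, task_a, task_b):
    """Map one task's feature into the joint space of pair (task_a, task_b)."""
    pair = np.concatenate([task_emb[task_a], task_emb[task_b]])
    return np.tanh(np.concatenate([feat, pair]) @ W)

# Features predicted for the same image by two task heads.
f_seg = rng.normal(size=FEAT_DIM)   # e.g. a segmentation feature
f_dep = rng.normal(size=FEAT_DIM)   # e.g. a depth feature

# Both land in the same pairwise space, where a cross-task consistency
# loss can supervise a task whose ground-truth label is missing.
z_a = to_joint_space(f_seg, 0, 1)
z_b = to_joint_space(f_dep, 0, 1)
consistency_loss = np.mean((z_a - z_b) ** 2)
print(z_a.shape, consistency_loss)
```

Because the mapping network is shared and only conditioned on the pair embedding, its parameter count does not grow quadratically with the number of task pairs, which is the computational-efficiency point the abstract alludes to.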