Multi-task learning promises better model generalization on a target task by jointly optimizing it with an auxiliary task. However, the current practice requires additional labeling effort for the auxiliary task, while not guaranteeing better model performance. In this paper, we find that jointly training a dense prediction (target) task with a self-supervised (auxiliary) task can consistently improve the performance of the target task, while eliminating the need to label auxiliary tasks. We refer to this joint training as Composite Learning (CompL). Experiments with CompL on monocular depth estimation, semantic segmentation, and boundary detection show consistent performance improvements on both fully and partially labeled datasets. Further analysis of depth estimation reveals that joint training with self-supervision outperforms most labeled auxiliary tasks. We also find that CompL can improve model robustness when models are evaluated in new domains. These results demonstrate the benefits of self-supervision as an auxiliary task, and establish the design of novel task-specific self-supervised methods as a new axis of investigation for future multi-task learning research.
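To make the joint-training setup concrete, the sketch below shows one way such composite training could be wired up: a shared encoder feeds both a dense-prediction target head (depth regression) and a self-supervised auxiliary head, and the two losses are summed. The choice of rotation prediction as the auxiliary task, the module names, and the weighting factor `aux_weight` are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of composite training (target task + self-supervised auxiliary task).
# All module names, the rotation auxiliary task, and aux_weight are illustrative
# assumptions, not the authors' code.
import torch
import torch.nn as nn

class CompositeModel(nn.Module):
    def __init__(self, num_rotations=4):
        super().__init__()
        # Shared encoder used by both the target and the auxiliary task.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Target head: dense prediction (here, single-channel depth regression).
        self.depth_head = nn.Conv2d(64, 1, 1)
        # Auxiliary head: self-supervised rotation classification (needs no extra labels).
        self.rot_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_rotations)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.depth_head(feats), self.rot_head(feats)

def composite_loss(model, images, depth_gt, aux_weight=0.1):
    # Self-supervised labels are derived from the data itself: random 90-degree rotations.
    k = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(images, k)]
    )
    depth_pred, _ = model(images)       # target branch on the original images
    _, rot_pred = model(rotated)        # auxiliary branch on the rotated images
    target_loss = nn.functional.l1_loss(depth_pred, depth_gt)
    aux_loss = nn.functional.cross_entropy(rot_pred, k)
    # Joint objective: both losses update the shared encoder.
    return target_loss + aux_weight * aux_loss
```

In this sketch, only the target task requires ground-truth annotations; the auxiliary supervision signal is generated from the input images, which reflects the abstract's point that the auxiliary task needs no additional labeling.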