Multi-Task Learning (MTL) aims to enhance model generalization by sharing representations between related tasks to achieve better performance. Typical MTL methods are jointly trained with complete ground-truth labels for all tasks. However, a single dataset may not contain the annotations for every task of interest. To address this issue, we propose a Semi-supervised Multi-Task Learning (SemiMTL) method that leverages the supervisory signals available in different datasets, particularly for the semantic segmentation and depth estimation tasks. To this end, we design an adversarial learning scheme for semi-supervised training that exploits unlabeled data to optimize all task branches simultaneously and accomplish all tasks across datasets with partial annotations. We further present a domain-aware discriminator structure with various alignment formulations to mitigate the domain discrepancy among datasets. Finally, we demonstrate the effectiveness of the proposed method in learning across datasets on challenging street-view and remote-sensing benchmarks.
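To make the cross-dataset adversarial training concrete, below is a minimal PyTorch-style sketch, assuming one dataset (A) carries only segmentation labels and the other (B) only depth labels. All module names (SharedEncoder, SegHead, DepthHead, TaskDiscriminator), the loss weight lam_adv, and the network sizes are illustrative placeholders rather than the authors' released implementation; the per-task discriminators shown here realize only the simplest output-space alignment, which the paper's domain-aware discriminator structure would extend.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """Tiny stand-in backbone shared by both task branches."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)

    def forward(self, x):
        return F.relu(self.conv(x))

class SegHead(nn.Module):
    """Per-pixel class logits for semantic segmentation."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.out = nn.Conv2d(16, num_classes, 1)

    def forward(self, f):
        return self.out(f)

class DepthHead(nn.Module):
    """Per-pixel depth regression."""
    def __init__(self):
        super().__init__()
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, f):
        return self.out(f)

class TaskDiscriminator(nn.Module):
    """Classifies whether a task prediction comes from the labeled or unlabeled dataset."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))

    def forward(self, p):
        return self.net(p)

encoder, seg_head, depth_head = SharedEncoder(), SegHead(), DepthHead()
d_seg, d_dep = TaskDiscriminator(19), TaskDiscriminator(1)
opt_g = torch.optim.Adam(
    list(encoder.parameters()) + list(seg_head.parameters()) + list(depth_head.parameters()),
    lr=1e-4)
opt_d = torch.optim.Adam(list(d_seg.parameters()) + list(d_dep.parameters()), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lam_adv = 0.01  # adversarial loss weight (illustrative value, not from the paper)

def train_step(img_a, seg_gt_a, img_b, depth_gt_b):
    """Dataset A provides segmentation labels only; dataset B provides depth labels only."""
    # --- update the shared encoder and both task heads ---
    fa, fb = encoder(img_a), encoder(img_b)
    seg_a, dep_a = seg_head(fa), depth_head(fa)   # A: labeled seg, unlabeled depth
    seg_b, dep_b = seg_head(fb), depth_head(fb)   # B: unlabeled seg, labeled depth

    # supervised losses on whichever annotation each dataset provides
    loss_sup = F.cross_entropy(seg_a, seg_gt_a) + F.l1_loss(dep_b, depth_gt_b)

    # adversarial losses: make predictions on the unlabeled domain indistinguishable
    # from predictions on the labeled domain for each task branch
    p_seg_b, p_dep_a = d_seg(F.softmax(seg_b, dim=1)), d_dep(dep_a)
    loss_adv = bce(p_seg_b, torch.ones_like(p_seg_b)) + bce(p_dep_a, torch.ones_like(p_dep_a))

    opt_g.zero_grad()
    (loss_sup + lam_adv * loss_adv).backward()
    opt_g.step()

    # --- update the per-task discriminators (labeled domain -> 1, unlabeled -> 0) ---
    r_seg, f_seg = d_seg(F.softmax(seg_a, dim=1).detach()), d_seg(F.softmax(seg_b, dim=1).detach())
    r_dep, f_dep = d_dep(dep_b.detach()), d_dep(dep_a.detach())
    loss_d = (bce(r_seg, torch.ones_like(r_seg)) + bce(f_seg, torch.zeros_like(f_seg))
              + bce(r_dep, torch.ones_like(r_dep)) + bce(f_dep, torch.zeros_like(f_dep)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    return loss_sup.item(), loss_adv.item(), loss_d.item()
```

In this setup each task branch is optimized on every batch: by its supervised loss on the dataset that annotates it, and by an adversarial loss on the dataset that does not, so that partial annotations across datasets still train all tasks jointly.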