Deep supervised models have an unprecedented capacity to absorb large quantities of training data. Hence, training on many datasets becomes a method of choice for achieving graceful degradation in unusual scenes. Unfortunately, different datasets often use incompatible labels. For instance, the Cityscapes road class subsumes all driving surfaces, while Vistas defines separate classes for road markings, manholes, etc. We address this challenge by proposing a principled method for seamless learning on datasets with overlapping classes, based on partial labels and a probabilistic loss. Our method achieves competitive within-dataset and cross-dataset generalization, as well as the ability to learn visual concepts which are not separately labeled in any of the training datasets. Experiments reveal competitive or state-of-the-art performance on two multi-domain dataset collections and on the WildDash 2 benchmark.
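The core idea of a partial-label probabilistic loss can be illustrated with a minimal sketch. Here each coarse dataset label (e.g. Cityscapes "road") is mapped to the *set* of fine-grained universal classes it subsumes, and the loss is the negative log of the total predicted probability mass over that set. The function name `partial_label_nll` and the class layout are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def partial_label_nll(logits, label_sets):
    """Partial-label negative log-likelihood (illustrative sketch).

    logits      -- array of shape (N, C): per-sample scores over C
                   fine-grained universal classes
    label_sets  -- list of N sets; each set holds the indices of the
                   fine classes subsumed by that sample's coarse label
                   (e.g. 'road' -> {road_surface, marking, manhole})
    """
    # numerically stable softmax over the universal class space
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # sum predicted probability over each sample's admissible set,
    # then penalize the negative log of that aggregated mass
    losses = [-np.log(p[list(s)].sum()) for p, s in zip(probs, label_sets)]
    return float(np.mean(losses))
```

Note that when a label set contains all classes the aggregated mass is 1 and the loss vanishes, so coarse labels constrain the model only as much as they actually disambiguate.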