In this work, we define and address a novel domain adaptation (DA) problem in semantic scene segmentation, where the target domain not only exhibits a data distribution shift with respect to the source domain, but also contains novel classes that do not exist in the latter. Unlike "open-set" and "universal domain adaptation", which both regard all objects from new classes as "unknown", we aim at explicit test-time prediction for these new classes. To reach this goal, we propose a framework that leverages domain adaptation and zero-shot learning techniques to enable "boundless" adaptation in the target domain. It relies on a novel architecture, along with a dedicated learning scheme, to bridge the source-target domain gap while learning how to map new classes' labels to relevant visual representations. Performance is further improved by self-training on target-domain pseudo-labels. For validation, we consider different domain adaptation set-ups, namely synthetic-2-real, country-2-country and dataset-2-dataset. Our framework outperforms the baselines by significant margins, setting competitive standards on all benchmarks for the new task. Code and models are available at https://github.com/valeoai/buda.