Unsupervised Domain Adaptation (UDA) is a transfer learning task that aims at training on an unlabeled target domain by leveraging a labeled source domain. Beyond the traditional scope of UDA, with a single source domain and a single target domain, real-world perception systems face a variety of scenarios to handle, from varying lighting conditions to many cities around the world. In this context, UDA with multiple target domains raises the challenge further by adding distribution shifts across the different target domains. This work focuses on a novel framework for learning UDA, continual UDA, in which models operate on multiple target domains discovered sequentially, without access to previous target domains. We propose MuHDi, for Multi-Head Distillation, a method that solves the catastrophic forgetting problem inherent in continual learning tasks. MuHDi performs distillation at multiple levels from the previous model as well as from an auxiliary target-specialist segmentation head. We report both extensive ablations and experiments on challenging multi-target UDA semantic segmentation benchmarks to validate the proposed learning scheme and architecture.
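To make the distillation idea concrete, below is a minimal PyTorch sketch of the kind of multi-level distillation loss the abstract describes: the current model is distilled both from the frozen previous model (to fight forgetting) and from an auxiliary head specializing on the newly discovered target. The module names (`prev_model`, `model.backbone`, `model.head`, `aux_head`), the KL-on-softened-logits formulation, the temperature, and the loss weights are illustrative assumptions, not the paper's exact formulation; the adaptation losses on the new target (e.g., adversarial or self-training objectives) are omitted.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened per-pixel class distributions.

    Logits have shape (B, C, H, W); `batchmean` sums the KL over classes
    and pixels and divides by the batch size (standard Hinton scaling by T^2).
    """
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def continual_distillation_step(images, prev_model, model, aux_head,
                                lambda_prev=1.0, lambda_aux=1.0):
    """One hypothetical continual-adaptation step on a new target domain.

    `prev_model` is the frozen model trained on earlier targets; `model`
    holds the current backbone and main segmentation head; `aux_head` is
    an auxiliary head specializing on the new target. All names and
    weights are assumptions for illustration.
    """
    with torch.no_grad():
        teacher_logits = prev_model(images)    # frozen previous model

    features = model.backbone(images)
    main_logits = model.head(features)         # multi-target main head
    aux_logits = aux_head(features)            # new-target specialist head

    # Distill the previous model into the main head to mitigate forgetting,
    # and distill the target-specialist head into it to absorb new knowledge.
    loss = (lambda_prev * distillation_loss(main_logits, teacher_logits)
            + lambda_aux * distillation_loss(main_logits, aux_logits.detach()))
    return loss
```

In this sketch, detaching the auxiliary logits keeps the specialist head trained only by its own (omitted) target objective, while the main head receives gradients from both distillation terms.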