Unsupervised Domain Adaptation (UDA) for semantic segmentation has been widely applied to real-world scenarios in which pixel-level labels are hard to obtain. Most existing UDA methods assume that all target data are introduced simultaneously; in the real world, however, data usually arrive sequentially. Moreover, Continual UDA, which addresses the more practical scenario of multiple target domains in a continual learning setting, has not been actively explored. In this light, we propose Continual UDA for semantic segmentation based on a newly designed Expanding Target-specific Memory (ETM) framework. Our novel ETM framework maintains a Target-specific Memory (TM) for each target domain to alleviate catastrophic forgetting. Furthermore, the proposed Double Hinge Adversarial (DHA) loss leads the network to better overall UDA performance. Our design of the TM and the training objectives lets the semantic segmentation network adapt to the current target domain while preserving the knowledge learned on previous target domains. The model with the proposed framework outperforms other state-of-the-art models in continual learning settings on standard benchmarks such as the GTA5, SYNTHIA, CityScapes, IDD, and Cross-City datasets. The source code is available at https://github.com/joonh-kim/ETM.