In this paper we present a solution to the task of "unsupervised domain adaptation (UDA) of a given pre-trained semantic segmentation model without relying on any source domain representations". Previous UDA approaches for semantic segmentation either employed simultaneous training of the model in the source and target domains, or relied on an additional network replaying source domain knowledge to the model during adaptation. In contrast, we present our novel Unsupervised BatchNorm Adaptation (UBNA) method, which adapts a given pre-trained model to an unseen target domain without using any source domain representations (neither data nor networks) beyond the existing model parameters from pre-training, and which can also be applied in an online setting or, in a few-shot manner, using just a few unlabeled images from the target domain. Specifically, we partially adapt the normalization layer statistics to the target domain using an exponentially decaying momentum factor, thereby mixing the statistics from both domains. Evaluation on standard UDA benchmarks for semantic segmentation shows that this is superior to a model without adaptation and to baseline approaches that use statistics from the target domain only. Compared to standard UDA approaches, we report a trade-off between performance and usage of source domain representations.
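To make the adaptation step concrete, the following is a minimal sketch of how such a partial BatchNorm statistics update with an exponentially decaying momentum could look, assuming a PyTorch model; the function name `ubna_adapt` and the hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import torch

def ubna_adapt(model, target_loader, num_steps=50, alpha_0=0.1, decay=0.95):
    """Hypothetical UBNA-style adaptation sketch (assumes PyTorch).

    Only the BatchNorm running statistics are updated on unlabeled target
    images; an exponentially decaying momentum mixes the source statistics
    with the target statistics instead of replacing them outright.
    """
    model.train()  # BN layers only update running stats in training mode
    bn_layers = [m for m in model.modules()
                 if isinstance(m, torch.nn.modules.batchnorm._BatchNorm)]
    with torch.no_grad():  # forward passes only; no labels, no weight updates
        for step, images in enumerate(target_loader):
            if step >= num_steps:
                break
            # Exponentially decaying momentum: early batches shift the
            # statistics toward the target domain, later batches less so.
            momentum = alpha_0 * (decay ** step)
            for bn in bn_layers:
                bn.momentum = momentum
            model(images)
```

In this sketch, only the normalization statistics change while all learned weights stay fixed, which matches the paper's premise of adapting a given pre-trained model without gradient-based retraining on source data.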