Unsupervised domain adaptation (DA) has gained substantial interest in semantic segmentation. However, almost all prior work assumes concurrent access to both labeled source and unlabeled target data, making it unsuitable for scenarios demanding source-free adaptation. In this work, we enable source-free DA by partitioning the task into two: a) source-only domain generalization and b) source-free target adaptation. Towards the former, we provide theoretical insights to develop a multi-head framework trained with a virtually extended multi-source dataset, aiming to balance generalization and specificity. Towards the latter, we utilize the multi-head framework to extract reliable target pseudo-labels for self-training. Additionally, we introduce a novel conditional prior-enforcing auto-encoder that discourages spatial irregularities, thereby enhancing pseudo-label quality. Experiments on the standard GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes benchmarks show our superiority even against non-source-free prior art. Further, we demonstrate compatibility with online adaptation, enabling deployment in a sequentially changing environment.
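As a rough illustration of the pseudo-labeling step described above (not the paper's exact procedure), the sketch below shows one common way reliable target pseudo-labels could be extracted from a multi-head segmentation model: averaging the per-head class probabilities and keeping only pixels whose ensemble confidence exceeds a cutoff. The function name, threshold, and ignore index are hypothetical choices for the sketch, not values taken from the paper.

```python
# Hypothetical sketch: pseudo-label extraction from a multi-head segmentation model.
# Pixels where the averaged head predictions are not confident are marked as ignored
# so they do not contribute to the self-training loss. All constants are illustrative.
import torch

IGNORE_INDEX = 255      # label value typically skipped by segmentation losses
CONF_THRESHOLD = 0.9    # hypothetical confidence cutoff for "reliable" pixels


def extract_pseudo_labels(head_logits):
    """head_logits: list of tensors, each of shape (B, C, H, W), one per head."""
    # Average class probabilities across heads (a simple ensemble of the heads).
    probs = torch.stack([torch.softmax(l, dim=1) for l in head_logits]).mean(dim=0)
    confidence, labels = probs.max(dim=1)  # both (B, H, W)
    # Discard low-confidence pixels from the pseudo-label map.
    labels[confidence < CONF_THRESHOLD] = IGNORE_INDEX
    return labels


if __name__ == "__main__":
    # Toy usage: 2 heads, 3 classes, a single 4x4 "image" of random logits.
    heads = [torch.randn(1, 3, 4, 4) for _ in range(2)]
    pseudo = extract_pseudo_labels(heads)
    print(pseudo.shape, pseudo.unique())
```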