In unsupervised domain adaptation (UDA), directly adapting from the source to the target domain usually suffers from significant discrepancies and leads to insufficient alignment. Thus, many UDA works attempt to close the domain gap gradually and smoothly via various intermediate spaces, a strategy dubbed domain bridging (DB). However, for dense prediction tasks such as domain adaptive semantic segmentation (DASS), existing solutions have mostly relied on rough style transfer, and how to bridge domains elegantly remains under-explored. In this work, we resort to data mixing to establish a deliberated domain bridging (DDB) for DASS, through which the joint distributions of the source and target domains are aligned and interact with each other in the intermediate space. At the heart of DDB lies a dual-path domain bridging step, which generates two intermediate domains using coarse-wise and fine-wise data mixing techniques, alongside a cross-path knowledge distillation step, which takes the two complementary models trained on the generated intermediate samples as 'teachers' to develop a superior 'student' in a multi-teacher distillation manner. These two optimization steps work in an alternating way and reinforce each other, giving rise to DDB with strong adaptation power. Extensive experiments on adaptive segmentation tasks with different settings demonstrate that our DDB significantly outperforms state-of-the-art methods. Code is available at https://github.com/xiaoachen98/DDB.git.
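To make the two mixing paths concrete, below is a minimal NumPy sketch of the general idea, assuming the coarse path pastes a rectangular region (CutMix-style) and the fine path pastes the pixels of selected classes (ClassMix-style) from a source sample onto a target sample. The function names, patch ratio, and class-selection rule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def coarse_mix(src_img, src_lbl, tgt_img, tgt_lbl, rng, ratio=0.5):
    """Coarse (region-level) mixing: paste a CutMix-style rectangular
    patch of the source image and label onto the target sample.
    The patch size (ratio of each side) is an illustrative choice."""
    h, w = src_img.shape[:2]
    ph, pw = int(h * ratio), int(w * ratio)
    top = rng.integers(0, h - ph + 1)
    left = rng.integers(0, w - pw + 1)
    img, lbl = tgt_img.copy(), tgt_lbl.copy()
    img[top:top + ph, left:left + pw] = src_img[top:top + ph, left:left + pw]
    lbl[top:top + ph, left:left + pw] = src_lbl[top:top + ph, left:left + pw]
    return img, lbl

def fine_mix(src_img, src_lbl, tgt_img, tgt_lbl, rng):
    """Fine (class-level) mixing: pick a random subset of the source
    classes (ClassMix-style) and paste their pixels onto the target."""
    classes = np.unique(src_lbl)
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    mask = np.isin(src_lbl, chosen)          # boolean pixel mask
    img, lbl = tgt_img.copy(), tgt_lbl.copy()
    img[mask] = src_img[mask]                # broadcast over channels
    lbl[mask] = src_lbl[mask]
    return img, lbl

# Toy usage: an all-class-1 source mixed onto an all-class-0 target.
rng = np.random.default_rng(0)
src_img, src_lbl = np.ones((8, 8, 3)), np.ones((8, 8), dtype=int)
tgt_img, tgt_lbl = np.zeros((8, 8, 3)), np.zeros((8, 8), dtype=int)
mix_img, mix_lbl = coarse_mix(src_img, src_lbl, tgt_img, tgt_lbl, rng)
```

Each mixed sample belongs to an intermediate domain containing pixels from both domains; in DDB one model is trained per path, and the two then serve as teachers for cross-path distillation.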