Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Existing self-training based UDA approaches assign pseudo labels to target data and treat them as ground-truth labels to fully leverage the unlabeled target data for model adaptation. However, the pseudo labels generated by a model optimized on the source domain inevitably contain noise due to the domain gap. To tackle this issue, we propose a MetaCorrection framework, in which a Domain-aware Meta-learning strategy is devised to benefit Loss Correction (DMLC) for UDA semantic segmentation. In particular, we model the noise distribution of pseudo labels in the target domain by introducing a noise transition matrix (NTM), and construct a meta-data set from domain-invariant source data to guide the estimation of the NTM. Through risk minimization on the meta-data set, the optimized NTM can correct the noise in pseudo labels and enhance the generalization ability of the model on the target data. Considering the capacity gap between shallow and deep features, we further employ the proposed DMLC strategy to provide matched and compatible supervision signals for features at different levels, thereby ensuring deep adaptation. Extensive experimental results highlight the effectiveness of our method against existing state-of-the-art methods on three benchmarks.
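The core idea of NTM-based loss correction can be illustrated with a minimal sketch. Here the model's softmax output is multiplied by the NTM `T` (where `T[i, j]` approximates the probability of observing noisy label `j` given true class `i`) before computing cross-entropy against the noisy pseudo labels; the function name and the toy values are hypothetical and not taken from the paper:

```python
import numpy as np

def ntm_corrected_loss(probs, pseudo_labels, T):
    """Cross-entropy corrected by a noise transition matrix (NTM).

    probs:         (N, C) softmax outputs of the model
    pseudo_labels: (N,)   noisy pseudo labels
    T:             (C, C) NTM, T[i, j] ~ P(noisy label j | true class i)
    """
    # Map the predicted clean-class distribution to a distribution
    # over noisy labels, then score it against the pseudo labels.
    corrected = probs @ T
    eps = 1e-12  # numerical stability for log
    picked = corrected[np.arange(len(pseudo_labels)), pseudo_labels]
    return -np.mean(np.log(picked + eps))

# Toy example with 3 classes; an identity NTM (no modeled noise)
# reduces the correction to standard cross-entropy.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
loss = ntm_corrected_loss(probs, labels, np.eye(3))
```

In the full DMLC framework the NTM itself is optimized by minimizing the risk on the meta-data set built from domain-invariant source data, rather than being fixed as in this sketch.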