Unsupervised Domain Adaptation (UDA) is known to trade a model's performance on a source domain for improved performance on a target domain. To resolve this issue, Unsupervised Domain Expansion (UDE) has recently been proposed to adapt the model to the target domain as UDA does, while maintaining its performance on the source domain. For both UDA and UDE, a model tailored to a given domain, be it the source or the target domain, is assumed to handle samples from that domain well. We question this assumption by reporting the existence of cross-domain visual ambiguity: because there is no crystal-clear boundary between the two domains, samples from one domain can be visually close to the other domain. We exploit this finding and accordingly propose in this paper Co-Teaching (CT), which consists of knowledge-distillation-based CT (kdCT) and mixup-based CT (miCT). Specifically, kdCT transfers knowledge from a leader-teacher network and an assistant-teacher network to a student network, so that the cross-domain visual ambiguity is better handled by the student. Meanwhile, miCT further enhances the generalization ability of the student. Comprehensive experiments on two image-classification benchmarks and two driving-scene-segmentation benchmarks justify the viability of the proposed method.
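The two components named above can be illustrated with a minimal NumPy sketch: kdCT as distillation losses from two teachers into one student, and miCT as mixup-style interpolation of inputs and labels. The temperature `t`, the leader/assistant weighting `alpha`, and all function names here are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, t=1.0):
    # Temperature-scaled softmax; t > 1 softens the distribution.
    z = logits / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, t=2.0):
    # KL divergence from the softened teacher distribution to the
    # softened student distribution (standard distillation loss form).
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)))

def kdct_loss(student_logits, leader_logits, assistant_logits, alpha=0.7, t=2.0):
    # kdCT sketch: the student learns from both a leader teacher and an
    # assistant teacher; alpha (assumed) weights the leader more heavily.
    return (alpha * kd_loss(student_logits, leader_logits, t)
            + (1.0 - alpha) * kd_loss(student_logits, assistant_logits, t))

def mixup(x1, y1, x2, y2, lam):
    # miCT sketch: convex combination of two samples and their labels,
    # which regularizes the student and aids generalization.
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2
```

With identical student and teacher logits the distillation loss is zero, and `lam` close to 1 in `mixup` keeps the sample near the first input; in practice `lam` would be drawn from a Beta distribution each step.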