Unsupervised domain adaptation (UDA) is an important topic in the computer vision community. The key difficulty lies in defining a common property shared by the source and target domains so that source-domain features can align with target-domain semantics. In this paper, we present a simple and effective mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP), which constrains the features extracted from the source and target domains to align with a domain-agnostic space. In practice, this is easily implemented as an extra loss term that incurs only a small additional cost. Under the standard evaluation protocol of transferring from synthesized to real data, we validate the effectiveness of different types of DAP; in particular, the prior borrowed from a text embedding model achieves segmentation accuracy surpassing state-of-the-art UDA approaches. Our research reveals that UDA benefits greatly from better proxies, possibly from other data modalities.
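The abstract notes that DAP is "easily implemented as an extra loss term." A minimal sketch of how such a term could look is shown below, assuming the prior takes the form of fixed per-class anchor vectors (e.g. text embeddings of the class names); the function name and tensor layout are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dap_loss(features, labels, class_embeddings):
    """Hypothetical domain-agnostic-prior loss sketch.

    features: (N, C, H, W) pixel features from the segmentation backbone.
    labels: (N, H, W) integer class indices (ground truth on the source
        domain, pseudo-labels on the target domain -- an assumption here).
    class_embeddings: (K, C) fixed domain-agnostic anchors, e.g. text
        embeddings of the K class names projected to the feature dimension.
    """
    n, c, h, w = features.shape
    # flatten pixels: (N*H*W, C)
    feats = features.transpose(0, 2, 3, 1).reshape(-1, c)
    # look up the anchor for each pixel's class: (N*H*W, C)
    anchors = class_embeddings[labels.reshape(-1)]
    # cosine distance pulls every pixel feature toward its class anchor
    eps = 1e-8
    cos = (feats * anchors).sum(axis=1) / (
        np.linalg.norm(feats, axis=1) * np.linalg.norm(anchors, axis=1) + eps
    )
    return float((1.0 - cos).mean())
```

The same loss is applied to both domains, so both feature distributions are drawn toward one shared, domain-agnostic space rather than directly toward each other.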