A dominant approach to unsupervised domain adaptation (UDA) is to map data points from the source and target domains into an embedding space, modeled as the output space of a shared deep encoder. The encoder is trained to make the embedding space domain-agnostic so that a classifier trained on the source domain generalizes to the target domain. A secondary mechanism for further improving UDA performance is to make the source domain distribution more compact, which improves model generalizability. We demonstrate that increasing the interclass margins in the embedding space helps to develop a UDA algorithm with improved performance. We estimate the multi-modal distribution of the source domain, learned internally as a result of pretraining, and use it to increase the interclass separation in the source domain and thereby reduce the effect of domain shift. We demonstrate that our approach leads to improved model generalizability on four standard benchmark UDA image classification datasets and compares favorably against existing methods.
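To make the two ingredients concrete, the sketch below illustrates one plausible way to (a) estimate a per-class Gaussian approximation of the internally learned source-domain distribution from a pretrained encoder, and (b) impose a hinge-style penalty that enlarges interclass margins between the estimated class modes. This is a minimal illustration, not the paper's actual algorithm: the function names (`estimate_class_gaussians`, `interclass_margin_loss`), the diagonal-Gaussian assumption, and the `margin` value are all assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def estimate_class_gaussians(encoder, source_loader, num_classes, device="cpu"):
    """Estimate a per-class Gaussian (mean, diagonal variance) over source embeddings.

    Approximates the multi-modal source distribution learned during pretraining,
    assuming one Gaussian mode per class (a simplification for this sketch).
    """
    encoder.eval()
    feats, labels = [], []
    with torch.no_grad():
        for x, y in source_loader:
            feats.append(encoder(x.to(device)))
            labels.append(y.to(device))
    feats, labels = torch.cat(feats), torch.cat(labels)
    means, variances = [], []
    for c in range(num_classes):
        fc = feats[labels == c]
        means.append(fc.mean(dim=0))
        variances.append(fc.var(dim=0) + 1e-6)  # small floor for numerical stability
    return torch.stack(means), torch.stack(variances)

def interclass_margin_loss(class_means, margin=10.0):
    """Hinge penalty pushing every pair of class means at least `margin` apart."""
    dists = torch.cdist(class_means, class_means)              # pairwise mean distances
    off_diag = dists[~torch.eye(len(class_means), dtype=torch.bool)]
    return F.relu(margin - off_diag).mean()
```

In a full UDA pipeline, a term such as `interclass_margin_loss` would typically be added to the source classification and domain-alignment objectives so that the source clusters stay well separated while the encoder is adapted to the target domain.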