It is well established that deep networks are effective at extracting features from a given labeled (source) dataset. However, they do not always generalize well to other (target) datasets, which often have a different underlying distribution. In this report, we evaluate four domain adaptation techniques for image classification tasks: Deep CORAL, Deep Domain Confusion, CDAN, and CDAN+E. These techniques are unsupervised, since the target dataset does not carry any labels during the training phase. We evaluate model performance on the Office-31 dataset. The GitHub repository accompanying this report can be found here: https://github.com/agrija9/Deep-Unsupervised-Domain-Adaptation.
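As a brief illustration of one of these techniques, the following is a minimal PyTorch-style sketch of the CORAL loss at the core of Deep CORAL, which penalizes the squared Frobenius distance between the covariance matrices of source and target features. This sketch is for illustration only and is not taken from the linked repository; the function name `coral_loss` and the tensor shapes are assumptions.

```python
import torch

def coral_loss(source, target):
    """CORAL loss: squared Frobenius distance between the source and
    target feature covariance matrices, scaled by 1 / (4 * d^2).
    `source` and `target` are (batch_size, d) feature tensors."""
    d = source.size(1)

    # Covariance of source-domain features
    source_centered = source - source.mean(dim=0, keepdim=True)
    cov_s = source_centered.t() @ source_centered / max(source.size(0) - 1, 1)

    # Covariance of target-domain features
    target_centered = target - target.mean(dim=0, keepdim=True)
    cov_t = target_centered.t() @ target_centered / max(target.size(0) - 1, 1)

    return ((cov_s - cov_t) ** 2).sum() / (4 * d * d)

if __name__ == "__main__":
    # Dummy source/target feature batches (e.g. outputs of a shared backbone)
    src = torch.randn(32, 256)
    tgt = torch.randn(32, 256)
    print(coral_loss(src, tgt))
```

In Deep CORAL this term is added to the usual classification loss on the labeled source data, so the network learns features that are both discriminative and aligned across domains.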