Deep learning is usually data-hungry, and unsupervised domain adaptation (UDA) is developed to transfer knowledge from a labeled source domain to an unlabeled target domain. Recently, deep self-training has emerged as a powerful means for UDA, involving an iterative process of predicting on the target domain and then taking the confident predictions as hard pseudo-labels for retraining. However, the pseudo-labels are usually unreliable, which easily leads to deviated solutions with propagated errors. In this paper, we resort to the energy-based model and constrain the training of the unlabeled target samples with an energy-function-minimization objective. This can be achieved via a simple additional regularization term or an energy-based loss. The framework allows us to gain the benefits of the energy-based model while retaining strong discriminative performance in a plug-and-play fashion. We investigate its convergence property and its connection with classification expectation minimization. We conduct extensive experiments on the most popular large-scale UDA benchmarks for image classification as well as semantic segmentation to demonstrate its generality and effectiveness.
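A minimal sketch of the idea, under the standard convention in the energy-based-model literature that the free energy of a sample is the negative log-sum-exp of the classifier logits; the function names, the weighting coefficient `lam`, and the combined-objective form are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def free_energy(logits):
    """Free energy E(x) = -log sum_k exp(f_k(x)), computed stably.

    Lower energy indicates the sample is assigned higher unnormalized
    density by the classifier-induced energy-based model.
    """
    m = logits.max(axis=-1, keepdims=True)
    return -(m.squeeze(-1) + np.log(np.exp(logits - m).sum(axis=-1)))

def total_objective(source_ce, target_logits, lam=0.1):
    """Hypothetical combined objective: supervised cross-entropy on the
    labeled source domain plus an energy-minimization regularizer on the
    unlabeled target samples (lam is an assumed trade-off weight)."""
    energy_reg = free_energy(np.asarray(target_logits)).mean()
    return source_ce + lam * energy_reg
```

Minimizing the regularizer pushes target samples toward low-energy (high-density) regions of the model, without requiring their pseudo-labels to be correct.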