With the goal of directly generalizing a trained model to unseen target domains, domain generalization (DG), a recently proposed learning paradigm, has attracted considerable attention. Previous DG models usually require a sufficient quantity of annotated samples from the observed source domains during training. In this paper, we relax this full-annotation requirement and investigate semi-supervised domain generalization (SSDG), where only one source domain is fully annotated while the other source domains remain entirely unlabeled during training. Facing the challenges of both tackling the domain gap among observed source domains and predicting on unseen target domains, we propose a novel deep framework that jointly exploits domain-aware pseudo-labels and a dual classifier to produce high-quality pseudo-labels. Concretely, to predict accurate pseudo-labels under domain shift, we develop a domain-aware pseudo-labeling module. Moreover, considering the inconsistent goals of generalization and pseudo-labeling (the former seeks to avoid overfitting to all source domains, while the latter may overfit the unlabeled source domains in pursuit of high accuracy), we employ a dual classifier to perform pseudo-labeling and domain generalization independently during training. Once accurate pseudo-labels are generated for the unlabeled source domains, a domain mixup operation is applied to synthesize new domains between the labeled and unlabeled domains, which is beneficial for boosting the generalization capability of the model. Extensive experiments on publicly available DG benchmark datasets demonstrate the efficacy of the proposed SSDG method.
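The domain mixup operation mentioned above can be sketched minimally as follows. This is an illustrative interpolation between labeled-domain samples and pseudo-labeled samples from the unlabeled domains, in the standard mixup style; the function name `mixup_domains` and the Beta parameter `alpha` are assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def mixup_domains(x_labeled, y_labeled, x_unlabeled, y_pseudo,
                  alpha=0.2, rng=None):
    """Interpolate samples from the labeled source domain with
    pseudo-labeled samples from an unlabeled source domain,
    synthesizing an intermediate domain (illustrative sketch).

    x_* : feature batches of identical shape
    y_* : one-hot (or soft) label batches of identical shape
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)  # mixing coefficient sampled per batch
    x_mix = lam * x_labeled + (1.0 - lam) * x_unlabeled
    y_mix = lam * y_labeled + (1.0 - lam) * y_pseudo  # soft mixed labels
    return x_mix, y_mix, lam
```

Training on such mixed batches exposes the model to convex combinations of the observed domains, which is one plausible mechanism behind the generalization gain the abstract describes.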