Domain Generalization (DG) is a fundamental challenge for machine learning models, which aims to improve model generalization across various domains. Previous methods focus on generating domain-invariant features from multiple source domains. However, we argue that domain variations also contain useful information, i.e., classification-aware information, for downstream tasks, which has been largely ignored. Instead of learning domain-invariant features from source domains, we decouple the input images into Domain Expert Features and noise. The proposed domain expert features lie in a learned latent space where the images in each domain can be classified independently, enabling the implicit use of classification-aware domain variations. Based on this analysis, we propose a novel paradigm called Domain Disentanglement Network (DDN) to disentangle the domain expert features from the source domain images and aggregate the source domain expert features to represent the target test domain. We also propose a new contrastive learning method to guide the domain expert features toward a more balanced and separable feature space. Experiments on the widely used benchmarks of PACS, VLCS, OfficeHome, DomainNet, and TerraIncognita demonstrate the competitive performance of our method compared to recently proposed alternatives.