Single domain generalization aims to learn a model from a single training domain (the source domain) and apply it to multiple unseen test domains (target domains). Existing methods focus on expanding the distribution of the training domain to cover the target domains, but they do not estimate the domain shift between the source and target domains. In this paper, we propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of the domain shift, and finally learns to reduce the domain shift for model adaptation. Under this paradigm, we propose a meta-causal learning method to learn meta-knowledge, that is, how to infer the causes of the domain shift between the auxiliary and source domains during training. This meta-knowledge is then used to analyze the shift between the target and source domains during testing. Specifically, we apply multiple transformations to source data to generate the auxiliary domain, perform counterfactual inference to learn to discover the causal factors of the shift between the auxiliary and source domains, and incorporate the inferred causality into factor-aware domain alignment. Extensive experiments on several image classification benchmarks demonstrate the effectiveness of our method.
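To make the simulate step concrete, the following is a minimal sketch, assuming PyTorch/torchvision and an illustrative set of transformations (blur, color jitter, rotation, additive noise) standing in for candidate causal factors of domain shift; the function names and the factor set are hypothetical and not the paper's exact implementation.

```python
# Minimal sketch of the "simulate" step: build an auxiliary domain by applying
# multiple transformations to source data. Each transformation acts as one
# candidate causal factor of domain shift (illustrative choices, not the
# authors' exact factor set).
import torch
import torchvision.transforms as T

CANDIDATE_FACTORS = {
    "blur":   T.GaussianBlur(kernel_size=5),
    "color":  T.ColorJitter(brightness=0.5, contrast=0.5, saturation=0.5),
    "rotate": T.RandomRotation(degrees=30),
}

def add_noise(x, std=0.1):
    """Additive Gaussian noise as another hypothetical shift factor."""
    return (x + std * torch.randn_like(x)).clamp(0.0, 1.0)

def build_auxiliary_domain(source_batch):
    """Return (transformed_batch, factor_name) pairs that act as the auxiliary domain."""
    auxiliary = [(tf(source_batch), name) for name, tf in CANDIDATE_FACTORS.items()]
    auxiliary.append((add_noise(source_batch), "noise"))
    return auxiliary

if __name__ == "__main__":
    # Fake source batch: 8 RGB images of size 32x32 with values in [0, 1].
    src = torch.rand(8, 3, 32, 32)
    for batch, factor in build_auxiliary_domain(src):
        print(factor, tuple(batch.shape))
```

In the full method, each transformed batch would presumably be paired with its source counterpart so that counterfactual inference can score how much each candidate factor contributes to the simulated shift, and those scores would then weight the factor-aware domain alignment.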