Domain generalization aims to learn, from multiple source domains, knowledge that is invariant across different distributions yet semantically meaningful for downstream tasks, in order to improve a model's generalization to unseen target domains. The fundamental objective is to understand the underlying "invariance" behind these observational distributions, and such invariance has been shown to be closely connected to causality. While many existing approaches exploit the property that causal features themselves are invariant across domains, we instead consider the invariance of the average causal effect of the features on the labels. This invariance regularizes our training approach, in which interventions are performed on the features to enforce stability of the classifier's causal predictions across domains. Our work thus sheds light on the domain generalization problem by introducing invariance of the mechanisms into the learning process. Experiments on several benchmark datasets demonstrate the competitive performance of the proposed method against state-of-the-art approaches.
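To make the intervention-based regularizer concrete, below is a minimal sketch of one way such a penalty could be implemented. Everything here is an illustrative assumption rather than the paper's exact formulation: the names `featurizer` and `classifier` are hypothetical, the Gaussian perturbation stands in for whatever feature intervention the method actually uses, and the cross-domain variance penalty is one simple way to enforce that the average causal effect agrees across domains.

```python
# Minimal sketch (assumed, not the paper's exact method): estimate the average
# causal effect (ACE) of a feature intervention on the classifier's prediction
# per source domain, then penalize disagreement of that effect across domains.
import torch
import torch.nn.functional as F


def ace_invariance_penalty(featurizer, classifier, domain_batches, sigma=0.1):
    """domain_batches: list of (inputs, labels), one batch per source domain."""
    effects = []
    for x, _y in domain_batches:
        z = featurizer(x)                         # observed features
        z_do = z + sigma * torch.randn_like(z)    # do-style intervention (assumed additive form)
        p = F.softmax(classifier(z), dim=1)
        p_do = F.softmax(classifier(z_do), dim=1)
        effects.append((p_do - p).mean(dim=0))    # per-domain average effect on predictions
    effects = torch.stack(effects)                # (num_domains, num_classes)
    return effects.var(dim=0).sum()               # small iff the effect is stable across domains


# Usage sketch: add the penalty to the task loss with a weight `lam`, e.g.
#   loss = cross_entropy_loss + lam * ace_invariance_penalty(featurizer, classifier, batches)
```

A variance penalty is only one possible choice of disagreement measure; pairwise distances between per-domain effects would serve the same purpose in this sketch.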