Due to spurious correlations, machine learning systems often fail to generalize to environments whose distributions differ from the one used at training time. Prior work addressing this, either explicitly or implicitly, attempted to find a data representation that has an invariant causal relationship with the target. This is done by leveraging a diverse set of training environments to reduce the effect of spurious features and build an invariant predictor. However, these methods have generalization guarantees only when both the data representation and the classifier come from a linear model class. We propose Invariant Causal Representation Learning (ICRL), a learning paradigm that enables out-of-distribution (OOD) generalization in the nonlinear setting (i.e., nonlinear representations and nonlinear classifiers). It builds upon a practical and general assumption: the prior over the data representation factorizes when conditioned on the target and the environment. Based on this, we show identifiability of the data representation up to very simple transformations. We also prove that all direct causes of the target can be fully discovered, which further enables us to obtain generalization guarantees in the nonlinear setting. Extensive experiments on both synthetic and real-world datasets show that our approach significantly outperforms a variety of baseline methods. Finally, in the concluding discussion, we further explore the aforementioned assumption and propose a general view, called the Agnostic Hypothesis: there exists a set of hidden causal factors affecting both inputs and outcomes. The Agnostic Hypothesis can provide a unifying view of machine learning in terms of representation learning. More importantly, it can inspire a new direction to explore a general theory for identifying hidden causal factors, which is key to enabling OOD generalization guarantees in machine learning.
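The factorization assumption stated above can be sketched as follows (a hedged reconstruction from the abstract's wording; the notation is assumed here, not given in the text: $Z$ denotes the learned data representation with components $Z_i$, $Y$ the target, and $E$ the environment):

```latex
% Conditionally factorized prior over the representation:
% given the target Y and environment E, the components of Z
% are assumed to be mutually independent.
p(Z \mid Y, E) \;=\; \prod_{i} p(Z_i \mid Y, E)
```

Under this assumption, conditioning on $(Y, E)$ removes statistical dependence among the representation's components, which is what makes the identifiability result ("up to very simple transformations") attainable.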