Learning representations that capture the underlying data-generating process is a key problem for the data-efficient and robust use of neural networks. One key property for robustness that the learned representation should capture, and which has recently received considerable attention, is invariance. In this work we provide a causal perspective on, and a new algorithm for, learning invariant representations. Empirically, we show that this algorithm performs well on a diverse set of tasks; in particular, we observe state-of-the-art performance on domain generalization, where we are able to significantly boost the scores of existing models.
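The abstract does not spell out the algorithm itself, so as background on what "learning invariant representations across environments" typically looks like in code, here is a minimal sketch of the well-known IRMv1 penalty of Arjovsky et al. (2019), a representative approach from this line of work rather than the authors' method. The helper names (`irm_penalty`, `train_step`, the `envs` batch list, and the weight `lam`) are illustrative assumptions, not part of the paper.

```python
# Illustrative sketch only: IRMv1-style invariance penalty (Arjovsky et
# al., 2019), shown as a representative invariant-representation
# objective; NOT the algorithm proposed in this paper.
import torch
import torch.nn as nn

def irm_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Gradient of the per-environment risk w.r.t. a fixed "dummy"
    # classifier scale w = 1.0. A small squared norm means the shared
    # representation is (locally) simultaneously optimal in this
    # environment, i.e. the predictor on top of it is invariant.
    scale = torch.ones(1, requires_grad=True)
    loss = nn.functional.binary_cross_entropy_with_logits(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def train_step(model: nn.Module, envs, optimizer, lam: float = 1.0) -> float:
    # envs: list of (x, y) batches, one per training environment
    # (hypothetical data layout assumed for this sketch).
    total_risk = torch.zeros(1)
    total_pen = torch.zeros(1)
    for x, y in envs:
        logits = model(x).squeeze(-1)
        total_risk = total_risk + nn.functional.binary_cross_entropy_with_logits(logits, y)
        total_pen = total_pen + irm_penalty(logits, y)
    # Average environment risk plus the invariance penalty: lam trades
    # off in-distribution fit against cross-environment invariance.
    loss = total_risk / len(envs) + lam * total_pen / len(envs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this kind of objective, driving the penalty to zero forces the classifier on top of the representation to be optimal in every environment at once, which is one formalization of the invariance property the abstract refers to; how the present paper's causal perspective differs is detailed in the body of the work, not here.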