Invariance principle-based methods, such as Invariant Risk Minimization (IRM), have recently emerged as promising approaches for Domain Generalization (DG). Despite their promising theory, these approaches fail in common classification tasks because the true invariant features are mixed with spurious invariant features. In this paper, we propose a framework based on the conditional entropy minimization principle to filter out the spurious invariant features, leading to a new algorithm with better generalization capability. We theoretically prove that, under certain assumptions, the representation function can precisely recover the true invariant features. In addition, we show that the proposed approach is closely related to the well-known Information Bottleneck (IB) framework. Both theoretical and numerical results are provided to justify our approach.
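As a toy illustration of the intuition behind the conditional entropy criterion (a minimal sketch, not the paper's actual algorithm or estimator): a feature that is fully determined by the label has conditional entropy H(Z | Y) near zero, while a feature carrying label-independent randomness retains high conditional entropy, so minimizing H(Z | Y) penalizes the latter. The discrete empirical estimator below is a simplifying assumption for illustration.

```python
import numpy as np

def conditional_entropy(z, y):
    """Empirical H(Z | Y) in bits for discrete arrays z and y."""
    h = 0.0
    for yv in np.unique(y):
        mask = y == yv
        p_y = mask.mean()                      # P(Y = yv)
        _, counts = np.unique(z[mask], return_counts=True)
        p = counts / counts.sum()              # P(Z | Y = yv)
        h += p_y * (-(p * np.log2(p)).sum())   # weighted entropy of Z given yv
    return h

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10_000)
z_true = y.copy()                    # feature determined by the label
z_spur = rng.integers(0, 2, 10_000)  # feature independent of the label

print(conditional_entropy(z_true, y))  # ≈ 0 bits
print(conditional_entropy(z_spur, y))  # ≈ 1 bit
```

Under these assumptions, a representation that minimizes conditional entropy given the label retains the label-determined feature and discards the label-independent one.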