Domain generalization aims at performing well on unseen test environments using data from a limited number of training environments. Despite a proliferation of algorithms proposed for this task, assessing their performance both theoretically and empirically remains very challenging. Distribution-matching algorithms such as (Conditional) Domain Adversarial Networks [Ganin et al., 2016, Long et al., 2018] are popular and enjoy empirical success, but they lack formal guarantees. Other approaches such as Invariant Risk Minimization (IRM) require a prohibitively large number of training environments -- linear in the dimension of the spurious feature space $d_s$ -- even on simple data models like the one proposed by [Rosenfeld et al., 2021]. Under a variant of this model, we show that both ERM and IRM cannot generalize with $o(d_s)$ environments. We then present an iterative feature-matching algorithm that is guaranteed with high probability to yield a predictor that generalizes after seeing only $O(\log d_s)$ environments. Our results provide the first theoretical justification for a family of distribution-matching algorithms widely used in practice, under a concrete nontrivial data model.
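To make the distribution-matching idea concrete, the sketch below (an illustration, not the paper's algorithm) measures how far per-environment feature means are from being matched. Restricting to an invariant coordinate drives this first-moment gap toward zero, while a spurious coordinate whose mean shifts across environments keeps it large; the environment setup and function names here are hypothetical.

```python
import numpy as np

def feature_mean_gap(features_by_env):
    """Average pairwise distance between per-environment feature means.

    A near-zero gap means the (first-moment) feature distributions are
    matched across environments -- the kind of invariance that
    distribution-matching domain-generalization methods try to enforce.
    """
    means = [f.mean(axis=0) for f in features_by_env]
    gaps = [np.linalg.norm(means[i] - means[j])
            for i in range(len(means))
            for j in range(i + 1, len(means))]
    return float(np.mean(gaps))

rng = np.random.default_rng(0)
# Two synthetic environments: coordinate 0 is "invariant" (same mean),
# coordinate 1 is "spurious" (its mean flips between environments).
env_a = rng.normal(loc=[1.0, 2.0], scale=0.1, size=(500, 2))
env_b = rng.normal(loc=[1.0, -2.0], scale=0.1, size=(500, 2))

full_gap = feature_mean_gap([env_a, env_b])               # large: spurious dim differs
inv_gap = feature_mean_gap([env_a[:, :1], env_b[:, :1]])  # small: invariant dim only
```

In this toy setup, `full_gap` is dominated by the spurious coordinate's mean shift, while `inv_gap` is close to zero; a feature-matching method would favor representations behaving like the latter.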