We are concerned with a worst-case scenario in model generalization: a model aims to perform well on many unseen domains while only a single domain is available for training. We propose Meta-Learning based Adversarial Domain Augmentation to solve this Out-of-Domain generalization problem. The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder to relax the widely used worst-case constraint. We further improve our method by integrating uncertainty quantification for efficient domain generalization. Extensive experiments on multiple benchmark datasets demonstrate its superior performance in tackling single domain generalization.
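A minimal sketch of the core idea, assuming PyTorch (>= 2.0 for torch.func). All names here (make_model, augment_domain, meta_step, gamma) are hypothetical, the input-space distance penalty stands in for the Wasserstein Auto-Encoder relaxation described above, and uncertainty quantification is omitted; this is an illustration under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

def make_model():
    # Toy classifier standing in for the task network.
    return nn.Sequential(nn.Flatten(),
                         nn.Linear(28 * 28, 64), nn.ReLU(),
                         nn.Linear(64, 10))

def augment_domain(model, x, y, steps=5, step_size=1.0, gamma=1.0):
    # Create "fictitious" yet "challenging" samples by gradient ascent on
    # the task loss, penalized by distance to the source sample (a relaxed
    # worst-case constraint; the paper measures this in WAE latent space).
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x_adv), y) - gamma * F.mse_loss(x_adv, x)
        (g,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv += step_size * g  # ascend the relaxed adversarial objective
    return x_adv.detach()

def meta_step(model, opt, x_src, y, x_aug, inner_lr=0.1):
    # Meta-train on the source domain, meta-test on the augmented domain:
    # one inner SGD step, then evaluate the adapted weights on x_aug.
    params = dict(model.named_parameters())
    loss_src = F.cross_entropy(functional_call(model, params, (x_src,)), y)
    grads = torch.autograd.grad(loss_src, tuple(params.values()),
                                create_graph=True)
    fast = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
    loss_meta = F.cross_entropy(functional_call(model, fast, (x_aug,)), y)
    opt.zero_grad()
    (loss_src + loss_meta).backward()
    opt.step()

model = make_model()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(32, 1, 28, 28)          # stand-in for one source batch
y = torch.randint(0, 10, (32,))
x_aug = augment_domain(model, x, y)     # fictitious, harder domain
meta_step(model, opt, x, y, x_aug)
```

The distance penalty keeps fictitious samples close to the source distribution so the augmentation stays semantically valid; replacing this pixel-space term with a WAE reconstruction, as the abstract indicates, allows larger cross-domain transportations than the sketch above.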