We strive to learn a model from a set of source domains that generalizes well to unseen target domains. The main challenge in such a domain generalization scenario is that no target domain data is available during training, so the learned model is never explicitly adapted to the unseen target domains. We propose learning to generalize across domains on single test samples. We leverage a meta-learning paradigm to train the model to acquire the ability to adapt with single samples at training time, so that it can further adapt itself to each individual test sample at test time. We formulate the adaptation to a single test sample as a variational Bayesian inference problem, which incorporates the test sample as a conditional into the generation of model parameters. Adaptation to each test sample requires only one feed-forward computation at test time, without any fine-tuning or self-supervised training on additional data from the unseen domains. Extensive ablation studies demonstrate that our model learns the ability to adapt to each single sample by mimicking domain shifts during training. Furthermore, our model achieves at least comparable, and often better, performance than state-of-the-art methods on multiple benchmarks for domain generalization.
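To make the idea of single-sample adaptation concrete, the following is a minimal sketch, not the paper's actual architecture: a stand-in backbone extracts a feature from one test sample, and a hypothetical amortized inference network maps that feature to the mean and log-variance of a Gaussian over classifier weights, from which sample-specific weights are drawn via the reparameterization trick. All dimensions, names, and the random-projection backbone are illustrative assumptions; the point is that adaptation to the sample is a single feed-forward pass, with no fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM, NUM_CLASSES = 16, 4  # illustrative sizes, not from the paper

# Stand-in backbone: a fixed random projection plus nonlinearity.
W_backbone = rng.standard_normal((32, FEAT_DIM)) * 0.1

def extract_features(x):
    """Map a raw input vector to a feature vector."""
    return np.tanh(x @ W_backbone)

# Hypothetical amortized inference network: maps a single test feature
# to the parameters (mean, log-variance) of a Gaussian over the
# classifier weights, i.e. the test sample conditions the generation
# of model parameters.
W_mu = rng.standard_normal((FEAT_DIM, FEAT_DIM * NUM_CLASSES)) * 0.05
W_logvar = rng.standard_normal((FEAT_DIM, FEAT_DIM * NUM_CLASSES)) * 0.05

def infer_classifier(feat):
    """Generate sample-conditioned classifier weights in one pass,
    sampling with the reparameterization trick."""
    mu = feat @ W_mu
    logvar = feat @ W_logvar
    eps = rng.standard_normal(mu.shape)
    w = mu + np.exp(0.5 * logvar) * eps  # w ~ N(mu, diag(exp(logvar)))
    return w.reshape(FEAT_DIM, NUM_CLASSES)

def predict_single(x):
    """Adapt to one test sample and classify it: a single forward pass."""
    feat = extract_features(x)
    W_cls = infer_classifier(feat)  # classifier adapted to this sample
    logits = feat @ W_cls
    return int(np.argmax(logits))

x_test = rng.standard_normal(32)  # one sample from an unseen domain
print(predict_single(x_test))
```

At meta-training time, the paper's scheme would additionally mimic domain shifts across the source domains so that the inference network learns to produce useful sample-specific parameters; that training loop is omitted here for brevity.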