Domain generalization is the task of learning models that generalize to unseen target domains. We propose a simple yet effective method for domain generalization, named cross-domain ensemble distillation (XDED), that learns domain-invariant features while encouraging the model to converge to flat minima, which has recently been shown to be a sufficient condition for domain generalization. To this end, our method generates an ensemble of the output logits from training samples that share the same label but come from different domains, and then penalizes each individual output for its mismatch with the ensemble. Also, we present a de-stylization technique that standardizes features to encourage the model to produce style-consistent predictions even on arbitrary target domains. Our method greatly improves generalization capability on public benchmarks for cross-domain image classification, cross-dataset person re-ID, and cross-dataset semantic segmentation. Moreover, we show that models learned by our method are robust against adversarial attacks and image corruptions.
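To make the two ingredients concrete, below is a minimal PyTorch sketch of (a) a cross-domain ensemble distillation loss as described in the abstract and (b) a feature-standardization step in the spirit of the de-stylization technique. All names (`xded_loss`, `destylize`), the temperature value, and the choice to treat the ensemble as a fixed (detached) teacher are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def xded_loss(logits: torch.Tensor, labels: torch.Tensor, tau: float = 4.0) -> torch.Tensor:
    """Cross-domain ensemble distillation (sketch).

    For each class present in a mini-batch drawn from multiple source
    domains, build an ensemble target by averaging the softened
    predictions of all samples with that label, then penalize each
    sample's softened prediction for diverging from the ensemble
    (KL divergence, as in knowledge distillation).
    """
    log_probs = F.log_softmax(logits / tau, dim=1)
    probs = log_probs.exp()
    loss = logits.new_zeros(())
    for c in labels.unique():
        mask = labels == c
        if mask.sum() < 2:  # need >= 2 samples to form a meaningful ensemble
            continue
        # Class-wise ensemble; detached here so it acts as a fixed teacher
        # (an assumption of this sketch).
        ensemble = probs[mask].mean(dim=0).detach()
        loss = loss + F.kl_div(
            log_probs[mask],
            ensemble.expand_as(log_probs[mask]),
            reduction="batchmean",
        )
    return loss

def destylize(feat: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """De-stylization (sketch): standardize each sample's feature map
    with its own channel-wise statistics, removing instance-level style
    information. One plausible reading of 'standardizes features';
    the paper's exact formulation may differ."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.std(dim=(2, 3), keepdim=True)
    return (feat - mu) / (sigma + eps)
```

In this reading, the ensemble acts as a smoothed, domain-averaged teacher for each class, which both aligns predictions across domains and softens the loss landscape; the standardization step removes per-sample style statistics so the classifier cannot rely on them.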