Domain generalization (DG) aims to train a model that performs well on unseen domains with different distributions. This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG), where only a single source domain is available for training. To tackle this challenge, we first seek to understand when neural networks fail to generalize. We empirically identify a property of a model, which we coin "model sensitivity", that correlates strongly with its generalization ability. Based on this analysis, we propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies. Models trained with these hard-to-learn samples effectively suppress sensitivity in the frequency space, which leads to improved generalization performance. Extensive experiments on multiple public datasets demonstrate the superiority of our approach, which surpasses state-of-the-art single-DG methods.
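To make the idea of frequency-targeted adversarial augmentation concrete, the sketch below shows one way such a perturbation could be implemented. It is a minimal illustration, not the paper's exact SADA procedure: it assumes a PyTorch classifier `model`, and the step size, number of steps, and clamping range are placeholder values chosen for illustration.

```python
# Illustrative sketch of spectral adversarial augmentation (not the exact SADA
# algorithm from the paper): multiplicatively perturb an image's amplitude
# spectrum in the direction that increases the task loss, then reconstruct.
import torch
import torch.nn.functional as F

def spectral_adversarial_augment(model, x, y, step_size=0.1, steps=3):
    """Generate a frequency-space adversarial view of a batch of images x."""
    # Decompose images into amplitude and phase via a 2-D FFT.
    freq = torch.fft.fft2(x.detach())
    amp, phase = freq.abs(), freq.angle()
    # Multiplicative perturbation on the amplitude spectrum, optimized adversarially.
    delta = torch.ones_like(amp, requires_grad=True)
    for _ in range(steps):
        x_adv = torch.fft.ifft2((amp * delta) * torch.exp(1j * phase)).real
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, delta)
        # Gradient ascent on the spectral perturbation: push toward higher task loss,
        # i.e., toward the frequencies the model is most sensitive to.
        delta = (delta + step_size * grad.sign()).clamp(0.5, 1.5).detach().requires_grad_(True)
    return torch.fft.ifft2((amp * delta) * torch.exp(1j * phase)).real.detach()
```

In this reading of the abstract, the classifier would then be trained on both the original images and these spectrally perturbed views, so that its predictions become less sensitive to the perturbed frequency bands.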