Deep learning methods can struggle to handle domain shifts that are absent from the training data, causing them to generalize poorly to unseen domains. This has drawn research attention to domain generalization (DG), which aims to improve a model's ability to generalize to out-of-distribution data. Adversarial domain generalization is a popular approach to DG, but conventional approaches (1) struggle to align features sufficiently well that local neighborhoods are mixed across domains; and (2) can suffer from feature space over-collapse, which can threaten generalization performance. To address these limitations, we propose localized adversarial domain generalization with space compactness maintenance~(LADG), which makes two major contributions. First, we propose an adversarial localized classifier as the domain discriminator, along with a principled primary branch; together these construct a min-max game in which the featurizer aims to produce locally mixed domains. Second, we propose a coding-rate loss to alleviate feature space over-collapse. We conduct comprehensive experiments on the Wilds DG benchmark to validate our approach, where LADG outperforms leading competitors on most datasets.
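To make the two ingredients concrete, below is a minimal PyTorch sketch written under our own assumptions; it is not the authors' implementation. A generic domain discriminator with a gradient-reversal layer stands in for the localized adversarial classifier, and the compactness term follows the coding-rate expression of the maximal coding rate reduction (MCR\textsuperscript{2}) literature. All names (\texttt{coding\_rate}, \texttt{GradReverse}, \texttt{lambda\_adv}, \texttt{lambda\_cr}) are hypothetical.

\begin{verbatim}
import torch
import torch.nn as nn

def coding_rate(z: torch.Tensor, eps: float = 0.5) -> torch.Tensor:
    """Coding rate R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z^T Z) of an
    (n, d) feature batch; maximizing it resists feature-space collapse."""
    n, d = z.shape
    identity = torch.eye(d, device=z.device, dtype=z.dtype)
    return 0.5 * torch.logdet(identity + (d / (n * eps ** 2)) * (z.T @ z))

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated gradient on the backward
    pass, so one optimizer step realizes the min-max game."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def training_step(featurizer, task_head, domain_disc,
                  x, y, domain_labels,
                  lambda_adv=1.0, lambda_cr=0.1):
    z = featurizer(x)
    task_loss = nn.functional.cross_entropy(task_head(z), y)
    # The discriminator tries to recover domain labels; the reversed
    # gradient pushes the featurizer toward locally domain-mixed features.
    adv_loss = nn.functional.cross_entropy(
        domain_disc(GradReverse.apply(z)), domain_labels)
    # Subtracting the coding rate keeps the feature space from
    # over-collapsing while the adversarial term aligns domains.
    return task_loss + lambda_adv * adv_loss - lambda_cr * coding_rate(z)
\end{verbatim}

In this sketch the three terms are balanced by the hypothetical weights \texttt{lambda\_adv} and \texttt{lambda\_cr}; the negative sign on the coding-rate term means it is maximized, counteracting the contraction that a purely adversarial objective can induce.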