Domain generalization aims to learn a universal model that performs well on unseen target domains by incorporating knowledge from multiple source domains. In this work, we consider the scenario where the conditional distributions of different classes shift differently across domains. When labeled samples in the source domains are limited, existing approaches are not sufficiently robust. To address this problem, we propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG), inspired by the concept of distributionally robust optimization. We encourage robustness over conditional distributions within class-specific Wasserstein uncertainty sets and optimize the worst-case performance of a classifier over these uncertainty sets. We further develop a test-time adaptation module that leverages optimal transport to quantify the relationship between the unseen target domain and the source domains, enabling adaptive inference on target data. Experiments on the Rotated MNIST, PACS, and VLCS datasets demonstrate that our method effectively balances robustness and discriminability in challenging generalization scenarios.
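To make the test-time adaptation idea concrete, the following is a minimal sketch of how optimal-transport-style distances could weight source domains by their closeness to an unseen target batch. This is an illustrative simplification, not the paper's actual method: the names (`domain_weights`) are hypothetical, and the mean per-feature 1-D Wasserstein distance is used here as a cheap proxy for the full optimal-transport cost between class-conditional distributions.

```python
import numpy as np
from scipy.stats import wasserstein_distance


def domain_weights(target, sources, temperature=1.0):
    """Weight each source domain by its closeness to the target batch.

    Closeness is approximated by the mean per-feature 1-D Wasserstein
    distance (a cheap stand-in for the full optimal-transport cost).
    Smaller distance -> larger weight, via a softmax over negated distances.
    """
    dists = []
    for src in sources:
        per_feature = [
            wasserstein_distance(target[:, j], src[:, j])
            for j in range(target.shape[1])
        ]
        dists.append(np.mean(per_feature))
    logits = -np.array(dists) / temperature
    w = np.exp(logits - logits.max())  # numerically stable softmax
    return w / w.sum()


rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(200, 5))
near = rng.normal(0.1, 1.0, size=(200, 5))   # domain similar to the target
far = rng.normal(3.0, 1.0, size=(200, 5))    # strongly shifted domain
weights = domain_weights(target, [near, far])
```

The resulting weights could then combine the per-domain classifiers' predictions, so that source domains closer to the target (in transport cost) contribute more to the adaptive inference.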