Leveraging available datasets to learn a model that generalizes well to unseen domains is important for computer vision, especially when annotated data for the unseen domain are unavailable. We study a novel and practical problem of Open Domain Generalization (OpenDG), which learns from different source domains to achieve high performance on an unknown target domain, where both the distributions and the label sets of the individual source domains and the target domain can differ. The problem accommodates diverse source domains and is broadly applicable to real-world scenarios. We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations. We augment domains at both the feature level, via a new Dirichlet mixup, and the label level, via distilled soft-labeling, which complements each domain with its missing classes and with knowledge from the other domains. We conduct meta-learning over domains by designing new meta-learning tasks and losses that preserve domain-specific knowledge while generalizing knowledge across domains. Experimental results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods on unseen domain recognition.
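To make the feature-level augmentation concrete, the sketch below illustrates one plausible form of Dirichlet mixup: features drawn from the different source domains are combined with mixing weights sampled from a Dirichlet distribution, so each augmented sample interpolates across all available domains at once. The function name, the `alpha` hyperparameter, and the NumPy-based setup are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumed, not the authors' code) of feature-level
# Dirichlet mixup across multiple source domains.
import numpy as np

def dirichlet_mixup(domain_features, alpha=1.0, rng=None):
    """Mix one feature vector per source domain with Dirichlet weights.

    domain_features: list of arrays of shape (feature_dim,), one feature
        sampled from each source domain.
    alpha: assumed concentration hyperparameter of the Dirichlet prior;
        smaller values concentrate the mixture on fewer domains.
    Returns the mixed feature and the mixing weights; the same weights
    could also mix the corresponding soft labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = len(domain_features)
    weights = rng.dirichlet(alpha * np.ones(k))            # lambda ~ Dir(alpha)
    mixed = sum(w * f for w, f in zip(weights, domain_features))
    return mixed, weights

# Usage: mix 128-dimensional features from three source domains.
feats = [np.random.randn(128) for _ in range(3)]
mixed_feat, lam = dirichlet_mixup(feats, alpha=0.5)
print(lam, mixed_feat.shape)
```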