Domain Generalization (DG) is a sub-branch of out-of-distribution generalization that trains models on multiple source domains and generalizes them to unseen target domains. Recently, a number of domain generalization algorithms have emerged, but most rely on complex, non-transferable architectures. Meanwhile, contrastive learning has become a promising solution for DG owing to its simplicity and efficiency. However, existing contrastive learning approaches neglect domain shifts, which cause severe model confusion. In this paper, we propose a Dual-Contrastive Learning (DCL) module built on feature contrast and prototype contrast. Moreover, we design a novel Causal Fusion Attention (CFA) module that fuses diverse views of a single image to obtain its prototype. Furthermore, we introduce a Similarity-based Hard-pair Mining (SHM) strategy to exploit information from diversity shift. Extensive experiments show that our method outperforms state-of-the-art algorithms on three DG benchmarks. The proposed method can also serve as a plug-and-play module that requires no domain labels.
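To make the dual contrast concrete, the sketch below shows one plausible form of such an objective in PyTorch: a supervised feature-level contrastive term over sample pairs plus a prototype-level term that pulls each feature toward its class prototype. The function name, temperature value, and loss combination are illustrative assumptions inferred from the abstract, not the authors' exact formulation; the CFA fusion and SHM mining components are not implemented here.

```python
import torch
import torch.nn.functional as F

def dual_contrastive_loss(features, prototypes, labels, temperature=0.1):
    """Hedged sketch of a dual (feature + prototype) contrastive objective.

    features:   (N, D) feature embeddings of N samples.
    prototypes: (C, D) one prototype embedding per class (e.g. obtained by
                fusing multiple views of each image, as the CFA module is
                described to do).
    labels:     (N,) class indices in [0, C).
    """
    features = F.normalize(features, dim=1)
    prototypes = F.normalize(prototypes, dim=1)

    # --- Feature contrast: supervised contrastive loss over sample pairs ---
    sim = features @ features.t() / temperature              # (N, N) similarities
    mask_pos = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    mask_pos.fill_diagonal_(0)                                # exclude self-pairs
    logits_mask = torch.ones_like(mask_pos).fill_diagonal_(0)
    log_prob = sim - torch.log((logits_mask * sim.exp()).sum(1, keepdim=True) + 1e-12)
    pos_per_sample = mask_pos.sum(1).clamp(min=1)
    loss_feat = -((mask_pos * log_prob).sum(1) / pos_per_sample).mean()

    # --- Prototype contrast: pull each feature toward its class prototype ---
    proto_logits = features @ prototypes.t() / temperature   # (N, C)
    loss_proto = F.cross_entropy(proto_logits, labels)

    return loss_feat + loss_proto
```

In this reading, the two terms are complementary: the feature term enforces instance-level compactness within a class across domains, while the prototype term anchors all domain-specific features of a class to a shared, view-fused representation.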