Domain Generalization (DG) is a sub-branch of out-of-distribution generalization that trains models on multiple source domains and generalizes them to unseen target domains. Several domain generalization algorithms have emerged recently, but most rely on complex, non-transferable architectures. Meanwhile, contrastive learning has become a promising solution for DG owing to its simplicity and efficiency. However, existing contrastive learning methods neglect domain shifts, which cause severe model confusion. In this paper, we propose a Dual-Contrastive Learning (DCL) module that contrasts both features and prototypes. Moreover, we design a novel Causal Fusion Attention (CFA) module that fuses diverse views of a single image to obtain its prototype. Furthermore, we introduce a Similarity-based Hard-pair Mining (SHM) strategy to exploit information about diversity shift. Extensive experiments show that our method outperforms state-of-the-art algorithms on three DG datasets. The proposed algorithm can also serve as a plug-and-play module that requires no domain labels.
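To make the idea of contrasting at both the feature level and the prototype level concrete, the sketch below implements a generic temperature-scaled InfoNCE loss and combines a feature-level term (anchor vs. an augmented view) with a prototype-level term (anchor vs. its class mean). This is only an illustrative sketch of dual contrastive learning in plain Python, not the paper's actual DCL module; the function names, the mean-based prototype, and the weighting factor `lam` are assumptions introduced here.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def info_nce(anchor, positive, negatives, tau=0.1):
    """Temperature-scaled InfoNCE: -log(exp(sim(a,p)/tau) / sum over positive and negatives)."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))

def prototype(class_features):
    """Class prototype as the mean of its feature vectors (a common simplification)."""
    dim = len(class_features[0])
    return [sum(f[i] for f in class_features) / len(class_features) for i in range(dim)]

def dual_contrastive_loss(anchor, aug_view, same_class_feats, other_protos,
                          tau=0.1, lam=0.5):
    """Hypothetical dual loss: feature-level contrast plus prototype-level contrast."""
    # Feature-level term: pull the anchor toward its augmented view,
    # pushing it away from prototypes of other classes.
    feat_term = info_nce(anchor, aug_view, other_protos, tau)
    # Prototype-level term: pull the anchor toward its own class prototype.
    proto_term = info_nce(anchor, prototype(same_class_feats), other_protos, tau)
    return feat_term + lam * proto_term
```

A hard-pair mining strategy in the spirit of SHM could then rank candidate negatives by `cosine(anchor, n)` and keep only the most similar (hardest) ones before calling `info_nce`; that selection step is omitted here for brevity.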