In Domain Generalization (DG) tasks, models are trained using data from the source domains only and must generalize to an unseen target domain, so they suffer from the distribution shift problem. It is therefore important to learn a classifier that focuses on the common representation shared across domains: a classifier that can discriminate on multiple source domains should also achieve high performance on an unseen target domain. Motivated by the success of cross attention in various cross-modal tasks, we find that cross attention is a powerful mechanism for aligning features drawn from different distributions. We therefore design a model named CADG (Cross Attention for Domain Generalization), in which cross attention plays a central role in addressing the distribution shift problem. This design allows the classifier to be applied across multiple domains, so it generalizes well to an unseen domain. Experiments show that our proposed method achieves state-of-the-art performance on a variety of domain generalization benchmarks compared with other single-model approaches, and can even outperform some ensemble-based methods.
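Cross attention here refers to the standard scaled dot-product mechanism, where queries come from one feature set and keys/values from another. A minimal NumPy sketch of how it can align features from two source domains (the function name, toy shapes, and random features are illustrative assumptions, not the paper's CADG implementation):

```python
import numpy as np

def cross_attention(x_a, x_b):
    """Scaled dot-product cross attention: queries from domain A,
    keys/values from domain B, so A's features are re-expressed as
    weighted combinations of B's features (a minimal sketch)."""
    d_k = x_a.shape[-1]
    scores = x_a @ x_b.T / np.sqrt(d_k)            # (n_a, n_b) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over B's features
    return weights @ x_b                           # (n_a, d) aligned features

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(4, 8))  # 4 feature vectors from source domain A
feats_b = rng.normal(size=(6, 8))  # 6 feature vectors from source domain B
aligned = cross_attention(feats_a, feats_b)
print(aligned.shape)  # (4, 8): each A-feature mapped into B's feature span
```

Because the output is a convex combination of domain B's features, each aligned vector lies within B's feature distribution, which is one intuition for why cross attention helps bridge distribution shift.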