Human visual perception can easily generalize to out-of-distribution visual data, which is far beyond the capability of modern machine learning models. Domain generalization (DG) aims to close this gap, with existing DG methods mainly focusing on the loss function design. In this paper, we propose to explore an orthogonal direction, i.e., the design of the backbone architecture. It is motivated by an empirical finding that transformer-based models trained with empirical risk minimization (ERM) outperform CNN-based models employing state-of-the-art (SOTA) DG algorithms on multiple DG datasets. We develop a formal framework to characterize a network's robustness to distribution shifts by studying its architecture's alignment with the correlations in the dataset. This analysis guides us to propose a novel DG model built upon vision transformers, namely Generalizable Mixture-of-Experts (GMoE). Extensive experiments on DomainBed demonstrate that GMoE trained with ERM outperforms SOTA DG baselines by a large margin. Moreover, GMoE is complementary to existing DG methods, and its performance is substantially improved when trained with DG algorithms.
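The core architectural idea behind GMoE, replacing a transformer block's dense feed-forward layer with sparsely routed experts, can be sketched as follows. This is an illustrative NumPy toy under assumed dimensions and top-1 routing, not the paper's implementation; all parameter names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: model width, expert hidden width, expert count, token count.
d_model, d_hidden, n_experts, n_tokens = 8, 16, 4, 5

# A linear gate plus one two-layer ReLU MLP per expert (randomly initialized).
gate_W = rng.normal(size=(d_model, n_experts))
W1 = rng.normal(size=(n_experts, d_model, d_hidden))
W2 = rng.normal(size=(n_experts, d_hidden, d_model))

def moe_ffn(x):
    """Mixture-of-experts feed-forward: each token is routed to its top-1 expert."""
    logits = x @ gate_W                              # (tokens, experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)       # softmax gate scores
    chosen = probs.argmax(axis=-1)                   # top-1 expert per token
    out = np.zeros_like(x)
    for e in range(n_experts):
        idx = np.where(chosen == e)[0]
        if idx.size == 0:
            continue                                 # expert received no tokens
        h = np.maximum(x[idx] @ W1[e], 0.0)          # expert e's ReLU MLP
        # Scale by the gate probability so routing remains trainable end-to-end.
        out[idx] = (h @ W2[e]) * probs[idx, e:e + 1]
    return out

tokens = rng.normal(size=(n_tokens, d_model))
print(moe_ffn(tokens).shape)
```

In a ViT-style block, this layer would replace the dense MLP after self-attention; only the experts a token is routed to are evaluated, so capacity grows without a proportional increase in per-token compute.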