Domain generalization (DG) aims at learning generalizable models under distribution shifts to avoid redundantly overfitting massive training data. Previous works with complex loss designs and gradient constraints have not yet led to empirical success on large-scale benchmarks. In this work, we reveal the mixture-of-experts (MoE) model's generalizability on DG by leveraging it to distributively handle multiple aspects of the predictive features across domains. To this end, we propose Sparse Fusion Mixture-of-Experts (SF-MoE), which incorporates sparsity and fusion mechanisms into the MoE framework to keep the model both sparse and predictive. SF-MoE has two dedicated modules: 1) a sparse block and 2) a fusion block, which disentangle and aggregate the diverse learned signals of an object, respectively. Extensive experiments demonstrate that SF-MoE is a domain-generalizable learner on large-scale benchmarks. It outperforms state-of-the-art counterparts by more than 2% across 5 large-scale DG datasets (e.g., DomainNet), with the same or even lower computational costs. We further reveal the internal mechanism of SF-MoE from a distributed representation perspective (e.g., visual attributes). We hope this framework can facilitate future research to push generalizable object recognition to the real world. Code and models are released at https://github.com/Luodian/SF-MoE-DG.
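The authors' implementation is available at the repository above. As a rough illustration of the two modules named in the abstract, the sketch below pairs a sparse top-k mixture-of-experts block (disentangling signals across experts) with a fusion block that aggregates them. It is a minimal sketch under stated assumptions, not the paper's design: the module names, the top-k routing scheme, the attention-based fusion, and all hyperparameters are illustrative choices.

```python
# Minimal sketch (not the authors' code) of a sparse MoE block followed by a
# fusion (aggregation) step. All names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class SparseMoEBlock(nn.Module):
    """Routes each token to its top-k experts (sparse dispatch)."""

    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)  # router over experts
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, tokens, dim)
        scores = self.gate(x)                             # (B, T, E)
        weights, idx = scores.topk(self.top_k, dim=-1)    # keep only top-k experts
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                             # (B, T, k): slots that chose expert e
            if mask.any():
                token_mask = mask.any(dim=-1)             # (B, T): tokens routed to expert e
                w = (weights * mask).sum(dim=-1, keepdim=True)  # gate weight for expert e
                out[token_mask] += w[token_mask] * expert(x[token_mask])
        return out


class FusionBlock(nn.Module):
    """Aggregates the per-token expert outputs into a joint representation."""

    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(x, x, x)                     # mix information across tokens
        return self.norm(x + fused)


# Usage: disentangle with the sparse block, then aggregate with the fusion block.
x = torch.randn(8, 196, 256)                              # (batch, tokens, dim)
y = FusionBlock(256)(SparseMoEBlock(256)(x))
print(y.shape)                                            # torch.Size([8, 196, 256])
```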