Domain generalization aims to learn a prediction model on multi-domain source data such that the model can generalize to a target domain with unknown statistics. Most existing approaches have been developed under the assumption that the source data is well-balanced in terms of both domain and class. However, real-world training data collected with different composition biases often exhibits severe distribution gaps across domains and classes, leading to substantial performance degradation. In this paper, we propose a self-balanced domain generalization framework that adaptively learns loss weights to alleviate the bias caused by the uneven distributions of the multi-domain source data. The self-balanced scheme is based on an auxiliary reweighting network that iteratively updates the loss weights conditioned on domain and class information by leveraging balanced meta data. Experimental results demonstrate the effectiveness of our method, which outperforms state-of-the-art domain generalization methods.
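The weighted objective behind such a reweighting scheme can be illustrated with a minimal NumPy sketch. Note the group weights below are initialized by inverse (domain, class) frequency purely for illustration; the proposed framework instead learns them adaptively with an auxiliary reweighting network trained on balanced meta data.

```python
import numpy as np

# Toy multi-domain source data: each sample has a domain id, a class id,
# and a per-sample loss produced by the task model.
rng = np.random.default_rng(0)
domains = rng.integers(0, 3, size=1000)   # 3 source domains
classes = rng.integers(0, 5, size=1000)   # 5 classes
losses = rng.random(1000)                 # stand-in per-sample losses

# Count samples per (domain, class) cell to expose the distribution gaps.
counts = np.zeros((3, 5))
np.add.at(counts, (domains, classes), 1)

# Illustrative stand-in for the learned weights: inverse-frequency
# weights per (domain, class) cell, normalized to mean 1 so the
# weighted loss stays on the same scale as the unweighted one.
group_w = 1.0 / np.maximum(counts, 1)
per_sample_w = group_w[domains, classes]
per_sample_w = per_sample_w / per_sample_w.mean()

# Self-balanced training objective: weighted mean of per-sample losses,
# which up-weights under-represented (domain, class) combinations.
balanced_loss = float(np.mean(per_sample_w * losses))
```

In the actual framework, `per_sample_w` would come from a reweighting network conditioned on the loss, domain, and class of each sample, updated iteratively against a held-out balanced meta set rather than fixed by counts.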