Domain generalization (DG) aims to alleviate the poor generalization capability of deep neural networks by learning a model from multiple source domains. A classical solution to DG is domain augmentation, whose common belief is that diversifying the source domains is conducive to out-of-distribution generalization. However, these claims are understood intuitively rather than mathematically. Our explorations empirically reveal that the correlation between model generalization and the diversity of domains may not be strictly positive, which limits the effectiveness of domain augmentation. This work therefore aims to guarantee and further enhance the validity of this strand. To this end, we propose a new perspective on DG that recasts it as a convex game between domains. We first encourage each diversified domain to enhance model generalization by elaborately designing a regularization term based on supermodularity. Meanwhile, a sample filter is constructed to eliminate low-quality samples, thereby avoiding the impact of potentially harmful information. Our framework presents a new avenue for the formal analysis of DG; heuristic analyses and extensive experiments demonstrate its rationality and effectiveness.
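For concreteness, the convexity invoked here is the standard notion from cooperative game theory: a game with player set $N$ and characteristic function $v : 2^N \to \mathbb{R}$ is convex exactly when $v$ is supermodular,
\[
v(S \cup T) + v(S \cap T) \;\ge\; v(S) + v(T) \qquad \forall\, S, T \subseteq N,
\]
or equivalently, when marginal contributions are nondecreasing: $v(S \cup \{i\}) - v(S) \le v(T \cup \{i\}) - v(T)$ for all $S \subseteq T \subseteq N \setminus \{i\}$. Reading the players as source domains and $v(S)$ as the generalization gain contributed by a domain subset $S$ is an interpretation we supply for illustration; the abstract itself does not specify the characteristic function.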
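The following is a minimal sketch of how a supermodularity-based penalty could look, not the paper's actual regularizer. It assumes a utility u(S) is available for every subset S of source domains (e.g., the negative validation loss of a model trained on S) and hinges on violations of the convex-game inequality; all names (u, domains, supermodularity_penalty) are hypothetical.

```python
# Hypothetical sketch: penalize violations of the convex-game inequality
#     u(S | T) + u(S & T) >= u(S) + u(T)
# over subsets of source domains. The abstract does not specify the actual
# regularizer; assuming a per-subset utility u(S) is ours.

from itertools import combinations


def supermodularity_penalty(u, domains):
    """Sum of hinge violations of u(S|T) + u(S&T) >= u(S) + u(T)
    over all pairs of subsets S, T of `domains`.

    u: dict mapping frozenset of domain ids -> float utility.
    domains: iterable of domain ids.
    """
    ids = list(domains)
    subsets = [frozenset(c) for r in range(len(ids) + 1)
               for c in combinations(ids, r)]
    penalty = 0.0
    for S, T in combinations(subsets, 2):
        gap = (u[S] + u[T]) - (u[S | T] + u[S & T])
        penalty += max(0.0, gap)  # positive gap = violated inequality
    return penalty


if __name__ == "__main__":
    # Toy utilities over two source domains {0, 1}: the joint gain
    # exceeds the sum of the individual gains, so u is supermodular
    # and the penalty is zero (adding a domain never hurts).
    u = {frozenset(): 0.0,
         frozenset({0}): 1.0,
         frozenset({1}): 1.2,
         frozenset({0, 1}): 2.5}
    print(supermodularity_penalty(u, [0, 1]))  # -> 0.0
```

A zero penalty corresponds to the regime where every diversified domain makes a nonnegative marginal contribution, which is precisely the property the abstract says the regularizer is designed to encourage.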