To generalize models trained on source domains to unseen target domains, domain generalization (DG) has recently attracted considerable attention. Since target domains cannot be involved in training, overfitting to the source domains is inevitable. As a popular regularization technique, the meta-learning training scheme has shown its ability to resist overfitting. However, in the training stage, current meta-learning-based methods utilize only one task along a single optimization trajectory, which may produce a biased and noisy optimization direction. Beyond the training stage, overfitting can also cause unstable predictions at test time. In this paper, we propose a novel multi-view DG framework to effectively reduce overfitting in both the training and test stages. Specifically, in the training stage, we develop a multi-view regularized meta-learning algorithm that employs multiple optimization trajectories to produce a suitable optimization direction for model updating. We also show theoretically that the generalization bound can be reduced by increasing the number of tasks in each trajectory. In the test stage, we use multiple augmented images to yield a multi-view prediction that alleviates unstable predictions and significantly improves model reliability. Extensive experiments on three benchmark datasets validate that our method can find a flat minimum that enhances generalization, and that it outperforms several state-of-the-art approaches.