By training a model on multiple observed source domains, domain generalization aims to generalize well to arbitrary unseen target domains without further training. Existing works mainly focus on learning domain-invariant features to improve generalization. However, since the target domain is not available during training, previous methods inevitably overfit to the source domains. To tackle this issue, we develop an effective dropout-based framework that enlarges the region of the model's attention, which effectively mitigates the overfitting problem. In particular, unlike the typical dropout scheme, which applies dropout to a fixed layer, we first randomly select one layer and then randomly select its channels to perform dropout. Besides, we adopt a progressive scheme that increases the dropout ratio during training, gradually raising the difficulty of training and thereby enhancing the robustness of the model. Moreover, to further alleviate overfitting, we apply augmentation at both the image level and the feature level to build a strong baseline model. Extensive experiments on multiple benchmark datasets show that our method outperforms the state-of-the-art methods.
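To make the core idea concrete, here is a minimal PyTorch sketch of random-layer, channel-wise dropout with a progressive ratio schedule, as described above. The wrapper class and its parameters (`max_ratio`, `total_steps`) are illustrative assumptions, not the paper's actual implementation:

```python
import random
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveRandomLayerDropout(nn.Module):
    """Sketch (assumed design): each forward pass picks one layer at
    random and applies channel-wise dropout to its output; the dropout
    ratio ramps linearly from 0 to max_ratio over training."""

    def __init__(self, layers, max_ratio=0.33, total_steps=10000):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        self.max_ratio = max_ratio      # hypothetical final dropout ratio
        self.total_steps = total_steps  # hypothetical schedule length
        self.step = 0

    def forward(self, x):
        # Progressive scheme: the dropout ratio grows as training proceeds,
        # gradually increasing the difficulty of the training task.
        ratio = self.max_ratio * min(self.step / self.total_steps, 1.0)
        # Randomly select ONE layer per forward pass (only at train time),
        # rather than always dropping at a fixed layer.
        drop_idx = random.randrange(len(self.layers)) if self.training else -1
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i == drop_idx:
                # Channel-wise dropout: zero out whole feature maps at
                # random, forcing attention onto a larger image region.
                x = F.dropout2d(x, p=ratio, training=True)
        if self.training:
            self.step += 1
        return x
```

Zeroing entire channels (rather than individual activations) removes whole semantic patterns at once, which is what pushes the network to attend to a broader region instead of a few dominant cues.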