Domain generalization in semantic segmentation aims to alleviate the performance degradation on unseen domains by learning domain-invariant features. Existing methods diversify the source-domain images by adding complex or even abnormal textures to reduce sensitivity to domain-specific features. However, these approaches depend heavily on the richness of the texture bank, and training them can be time-consuming. Instead of importing textures arbitrarily or augmenting styles randomly, we focus on the single source domain itself to achieve generalization. In this paper, we present a novel adaptive texture filtering mechanism that suppresses the influence of texture without any augmentation, thus eliminating the interference of domain-specific features. Furthermore, we design a hierarchical guidance generalization network equipped with structure-guided enhancement modules, whose purpose is to learn domain-invariant generalized knowledge. Extensive experiments and ablation studies on widely used datasets verify the effectiveness of the proposed model and reveal its superiority over other state-of-the-art alternatives.
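The abstract does not specify how the adaptive texture filter is realized; as a minimal illustrative sketch only (not the paper's actual mechanism), the snippet below uses a plain Gaussian low-pass filter as a stand-in to show the general idea of damping high-frequency, domain-specific texture while retaining the structural content that a structure-guided segmentation network could consume. All function names and parameters here are hypothetical.

```python
# Sketch only: a Gaussian low-pass stand-in for texture suppression.
# The paper's adaptive texture filtering mechanism is not reproduced here.
import torch
import torch.nn.functional as F

def gaussian_kernel(ksize: int = 7, sigma: float = 2.0) -> torch.Tensor:
    """Build a 2-D Gaussian kernel of shape (1, 1, ksize, ksize)."""
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, ksize, ksize)

def suppress_texture(img: torch.Tensor, ksize: int = 7, sigma: float = 2.0) -> torch.Tensor:
    """Blur each channel to damp texture (high-frequency, domain-specific detail).

    img: (B, C, H, W) tensor in [0, 1]. Returns a texture-suppressed image of the
    same shape that could serve as a structure-oriented auxiliary input.
    """
    b, c, h, w = img.shape
    # Depthwise convolution: one Gaussian kernel per channel.
    k = gaussian_kernel(ksize, sigma).to(img.device).repeat(c, 1, 1, 1)
    return F.conv2d(img, k, padding=ksize // 2, groups=c)

# Usage: feed both the original image and its texture-suppressed version to the
# segmentation network, e.g. x_struct = suppress_texture(x) before the encoder.
x = torch.rand(2, 3, 256, 512)
x_struct = suppress_texture(x)
```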