Semantic segmentation is a crucial image understanding task in which each pixel of an image is assigned a corresponding label. Since pixel-wise ground-truth labeling is tedious and labor-intensive, many practical applications exploit synthetic images to train models for real-world image semantic segmentation, i.e., Synthetic-to-Real Semantic Segmentation (SRSS). However, Deep Convolutional Neural Networks (CNNs) trained on the source synthetic data may not generalize well to the target real-world data. In this work, we propose two simple yet effective texture randomization mechanisms, Global Texture Randomization (GTR) and Local Texture Randomization (LTR), for domain generalization based SRSS. GTR randomizes the texture of source images into diverse unreal texture styles, aiming to alleviate the network's reliance on texture while promoting the learning of domain-invariant cues. In addition, we find that texture differences do not always occur across an entire image and may appear only in some local areas. Therefore, we further propose an LTR mechanism that generates diverse local regions for partially stylizing the source images. Finally, we implement a regularization of Consistency between GTR and LTR (CGL) to harmonize the two proposed mechanisms during training. Extensive experiments on five publicly available datasets (i.e., GTA5, SYNTHIA, Cityscapes, BDDS and Mapillary) with various SRSS settings (i.e., GTA5/SYNTHIA to Cityscapes/BDDS/Mapillary) demonstrate that the proposed method is superior to the state-of-the-art methods for domain generalization based SRSS.
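To make the two mechanisms concrete, the core idea of GTR (stylizing the whole image) versus LTR (stylizing only a local region) can be sketched as simple image blending. This is a minimal NumPy sketch under stated assumptions: it presumes a stylized version of the source image has already been produced by some texture/style source (the paper's actual stylization procedure and blending schedule may differ), and the function names and the `alpha`/`mask` parameters are illustrative, not from the paper.

```python
import numpy as np

def global_texture_randomization(image, styled, alpha):
    """GTR sketch: blend the entire source image with an unreal-texture
    stylized version, with blending strength alpha in [0, 1]."""
    return (1.0 - alpha) * image + alpha * styled

def local_texture_randomization(image, styled, mask):
    """LTR sketch: replace only the pixels selected by a binary mask
    (a randomly generated local region) with their stylized counterparts."""
    return np.where(mask, styled, image)
```

With `alpha = 0` or an all-zero mask the source image is returned unchanged, and with `alpha = 1` or an all-one mask the fully stylized image is returned; intermediate values yield the partially randomized textures the abstract describes. A consistency term in the spirit of CGL could then penalize the discrepancy between the network's predictions on the GTR and LTR versions of the same image.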