Recent work attempts to improve semantic segmentation performance by exploring well-designed architectures on a target dataset. However, it remains challenging to build a unified system that simultaneously learns from various datasets due to the inherent distribution shift across different datasets. In this paper, we present a simple, flexible, and general method for semantic segmentation, termed Cross-Dataset Collaborative Learning (CDCL). Given multiple labeled datasets, we aim to improve the generalization and discrimination of feature representations on each dataset. Specifically, we first introduce a family of Dataset-Aware Blocks (DAB) as the fundamental computing units of the network, which help capture homogeneous representations and heterogeneous statistics across different datasets. Second, we propose a Dataset Alternation Training (DAT) mechanism to efficiently facilitate the optimization procedure. We conduct extensive evaluations on four diverse datasets, i.e., Cityscapes, BDD100K, CamVid, and COCO Stuff, under both single-dataset and cross-dataset settings. Experimental results demonstrate that our method consistently achieves notable improvements over prior single-dataset and cross-dataset training methods without introducing extra FLOPs. In particular, with the same PSPNet (ResNet-18) architecture, our method outperforms the single-dataset baseline by 5.65\%, 6.57\%, and 5.79\% mIoU on the validation sets of Cityscapes, BDD100K, and CamVid, respectively. Code and models will be released.
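The abstract does not spell out the internal design of DAB or DAT, so the following is a minimal, hypothetical sketch of one plausible reading: a Dataset-Aware Block as a convolution with weights shared across datasets (homogeneous representations) paired with per-dataset BatchNorm layers that keep separate statistics (heterogeneous statistics), and Dataset Alternation Training as a round-robin over the datasets' mini-batches. The names `DatasetAwareBlock`, `dataset_alternation_training`, `num_datasets`, and `dataset_idx` are illustrative assumptions, not identifiers from the paper or its released code.

```python
# Hypothetical sketch of CDCL components (assumed design, not the authors' code).
import itertools
import torch.nn as nn


class DatasetAwareBlock(nn.Module):
    """Shared 3x3 conv (homogeneous weights) + per-dataset BatchNorm (heterogeneous stats)."""

    def __init__(self, in_ch, out_ch, num_datasets):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)  # shared across datasets
        self.bns = nn.ModuleList(nn.BatchNorm2d(out_ch) for _ in range(num_datasets))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x, dataset_idx):
        # Select the BatchNorm branch that tracks this dataset's statistics.
        return self.relu(self.bns[dataset_idx](self.conv(x)))


def dataset_alternation_training(model, loaders, optimizer, criterion, steps):
    """Alternate one mini-batch from each dataset per optimization step (assumed DAT scheme)."""
    iters = [itertools.cycle(dl) for dl in loaders]
    for step in range(steps):
        d = step % len(loaders)                 # round-robin over datasets
        images, labels = next(iters[d])
        logits = model(images, dataset_idx=d)   # model routes through dataset-specific BN
        loss = criterion(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Because the convolutional weights are shared while only the lightweight normalization branches are duplicated, such a design would add no extra FLOPs at inference time, consistent with the claim above.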