We unveil a long-standing problem in prevailing co-saliency detection systems: an inconsistency between training and testing. Constructing a high-quality co-saliency detection dataset involves time-consuming and labor-intensive pixel-level labeling, which has forced most recent works to rely instead on semantic segmentation or saliency detection datasets for training. However, the lack of proper co-saliency cues and the absence of multiple foreground objects in these datasets lead models to learn spurious variations and inherent biases. To tackle this, we introduce the idea of counterfactual training through context adjustment and propose a "cost-free" group-cut-paste (GCP) procedure that leverages off-the-shelf images to synthesize new samples. Following GCP, we collect a novel dataset called Context Adjustment Training (CAT). CAT consists of 33,500 images, four times larger than current co-saliency detection datasets. All samples are automatically annotated with high-quality masks, object categories, and edge maps. Extensive experiments on recent benchmarks show that CAT improves various state-of-the-art models by a large margin (5%~25%). We hope that the scale, diversity, and quality of our dataset can benefit researchers in this area and beyond. Our dataset will be publicly accessible through our project page.
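To make the group-cut-paste idea concrete, the sketch below illustrates one plausible synthesis step: cut a salient object out of an annotated saliency image and paste it into a group of background images, so the pasted object becomes the co-salient target and its pixel-level mask comes for free. The function names, resizing policy, and random placement here are our own illustrative assumptions, not the paper's exact procedure.

```python
# A minimal GCP-style synthesis sketch (assumed details, not the paper's code).
import random
import numpy as np
from PIL import Image


def cut_object(image: Image.Image, mask: Image.Image):
    """Crop the salient object (and its mask) to the mask's bounding box."""
    m = np.array(mask.convert("L")) > 127
    ys, xs = np.nonzero(m)
    box = (xs.min(), ys.min(), xs.max() + 1, ys.max() + 1)
    return image.crop(box), mask.convert("L").crop(box)


def paste_into_group(obj, obj_mask, backgrounds, scale=0.4, seed=0):
    """Paste one cut-out object into every background image, yielding a
    synthetic co-salient group with pixel-accurate masks for free."""
    rng = random.Random(seed)
    group = []
    for bg in backgrounds:
        bg = bg.convert("RGB")
        w, h = bg.size
        # Resize the object relative to the background width (assumed policy).
        ow = max(1, int(w * scale))
        oh = max(1, int(obj.height * ow / obj.width))
        o = obj.resize((ow, oh))
        om = obj_mask.resize((ow, oh))
        # Random placement: each group image shows the object in a new context.
        x = rng.randint(0, max(0, w - ow))
        y = rng.randint(0, max(0, h - oh))
        canvas = bg.copy()
        canvas.paste(o, (x, y), om)          # alpha-composite via the mask
        full_mask = Image.new("L", (w, h), 0)
        full_mask.paste(om, (x, y))          # the new ground-truth annotation
        group.append((canvas, full_mask))
    return group
```

Because the pasted object is the only element shared across the group, the composite images supply the co-saliency signal that single-object saliency datasets lack, while varying the backgrounds adjusts the context around the same foreground, in the spirit of counterfactual training.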