We unveil a long-standing problem in prevailing co-saliency detection systems: an inconsistency between training and testing. Constructing a high-quality co-saliency detection dataset requires time-consuming and labor-intensive pixel-level labeling, which has forced most recent works to rely instead on semantic segmentation or saliency detection datasets for training. However, the lack of proper co-saliency cues and the absence of multiple foreground objects in these datasets can lead models to learn spurious variations and inherent biases. To tackle this, we introduce the idea of counterfactual training through context adjustment, and propose a "cost-free" group-cut-paste (GCP) procedure that leverages images from off-the-shelf saliency detection datasets to synthesize new samples. Following GCP, we collect a novel dataset called Context Adjustment Training (CAT). CAT consists of 33,500 images, four times more than existing co-saliency detection datasets. All images are automatically annotated with high-quality masks, object categories, and edge maps. Extensive experiments with state-of-the-art models demonstrate the superiority of our dataset. We hope that the scale, diversity, and quality of CAT can benefit researchers in this area and beyond. The dataset and benchmark toolkit will be publicly accessible through our project page.