We unveil a long-standing problem in prevailing co-saliency detection systems: training and testing are inconsistent. Constructing a high-quality co-saliency detection dataset requires time-consuming, labor-intensive pixel-level labeling, which has forced most recent works to rely instead on semantic segmentation or saliency detection datasets for training. However, the lack of proper co-saliency cues and the absence of multiple foreground objects in these datasets can lead models to learn spurious variations and inherent biases. To tackle this, we introduce the idea of counterfactual training through context adjustment and propose a "cost-free" group-cut-paste (GCP) procedure that leverages images from off-the-shelf saliency detection datasets to synthesize new samples. Following GCP, we collect a novel dataset called Context Adjustment Training. Its two variants, CAT and CAT+, consist of 16,750 and 33,500 images, respectively. All images are automatically annotated with high-quality masks. As a by-product, object categories and edge information are also provided to facilitate related work. Extensive experiments with state-of-the-art models demonstrate the superiority of our dataset. We hope that the scale, diversity, and quality of CAT/CAT+ can benefit researchers in this area and beyond. The dataset and benchmark toolkit will be accessible through our project page.
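The abstract does not spell out the GCP procedure, but its core idea, cutting a masked salient object out of a saliency-detection image and pasting it into a group of context images so that the group shares a common foreground, can be sketched in a few lines. The function below is a hypothetical illustration, not the paper's implementation; `group_cut_paste`, its arguments, and the fixed paste positions are all assumptions for the sake of the example.

```python
import numpy as np

def group_cut_paste(obj_img, obj_mask, context_imgs, positions):
    """Hypothetical sketch of group-cut-paste (GCP): paste one salient
    object (obj_img with boolean obj_mask, both H x W) into each context
    image at the given top-left (y, x) position, producing a synthetic
    co-saliency group with pixel-accurate masks for free."""
    h, w = obj_mask.shape
    group, masks = [], []
    for ctx, (y, x) in zip(context_imgs, positions):
        canvas = ctx.copy()
        # Copy only the object's pixels into the target region.
        region = canvas[y:y + h, x:x + w]
        region[obj_mask] = obj_img[obj_mask]
        # The pasted mask doubles as the ground-truth annotation.
        new_mask = np.zeros(canvas.shape[:2], dtype=bool)
        new_mask[y:y + h, x:x + w] = obj_mask
        group.append(canvas)
        masks.append(new_mask)
    return group, masks
```

Because the object and its mask are pasted together, every synthesized image comes with an exact segmentation label at zero annotation cost, which is what makes the procedure "cost-free".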