Existing camouflaged object detection (COD) methods rely heavily on large-scale datasets with pixel-wise annotations. However, because camouflaged objects have ambiguous boundaries, annotating them pixel-wise is very time-consuming and labor-intensive, taking about 60 minutes per image. In this paper, we propose the first weakly-supervised COD method, which uses scribble annotations as supervision. To this end, we first relabel 4,040 images from existing camouflaged object datasets with scribbles, which takes only about 10 seconds per image. Since scribble annotations describe only the primary structure of objects without boundary details, we propose a novel consistency loss that helps the network learn to localize the boundaries of camouflaged objects. It consists of two parts: a cross-view loss to attain reliable consistency across different images, and an inside-view loss to maintain consistency within a single prediction map. Moreover, we observe that humans use semantic information to segment the regions near the boundaries of camouflaged objects. We therefore further propose a feature-guided loss, which combines visual features extracted directly from images with semantically significant features captured by the model. Finally, we propose a novel network that performs COD by learning structural information and semantic relations from scribbles. The network contains two novel modules: a local-context contrasted (LCC) module, which mimics visual inhibition to enhance image contrast and sharpness and to expand the scribbles into potential camouflaged regions, and a logical semantic relation (LSR) module, which analyzes semantic relations to determine which regions represent the camouflaged object. Experimental results show that our model outperforms relevant state-of-the-art methods on three COD benchmarks, with average improvements of 11.0% on MAE, 3.2% on S-measure, 2.5% on E-measure, and 4.4% on weighted F-measure.
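The abstract does not spell out the two consistency terms, so the following is only a minimal PyTorch sketch of one plausible reading: the cross-view term enforces agreement between predictions on two views of the same image (a horizontal flip is assumed here purely for illustration), and the inside-view term penalizes local variation within a single prediction map. The function name, the choice of view transform, and the L1 comparison are all assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, image, pred):
    """Hypothetical sketch of a two-part consistency loss.

    cross-view: predictions should agree across two views of the
    same image (here, a horizontal flip, chosen as an assumption).
    inside-view: a single prediction map should be locally smooth.
    """
    # Cross-view term: predict on a flipped view, flip the result
    # back to the original orientation, and compare the two maps.
    flipped_pred = model(torch.flip(image, dims=[-1]))
    cross_view = F.l1_loss(pred, torch.flip(flipped_pred, dims=[-1]))

    # Inside-view term: penalize differences between horizontally
    # and vertically neighboring pixels of one prediction map.
    dh = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean()
    dv = (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()
    inside_view = dh + dv

    return cross_view + inside_view
```

In this sketch both terms are equally weighted; a practical implementation would likely balance them with tunable coefficients.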