While class activation maps (CAMs) generated by image classification networks have been widely used for weakly supervised object localization (WSOL) and semantic segmentation (WSSS), such classifiers usually focus only on discriminative object regions. In this paper, we propose Contrastive learning for Class-agnostic Activation Map (C$^2$AM) generation using only unlabeled image data, without any image-level supervision. The core idea comes from two observations: i) the semantic information of foreground objects usually differs from that of their backgrounds; ii) foreground objects with similar appearance, or backgrounds with similar color/texture, have similar representations in the feature space. We form positive and negative pairs based on these relations and force the network to disentangle foreground and background into a class-agnostic activation map using a novel contrastive loss. Because the network is guided to discriminate foreground from background across images, the class-agnostic activation maps learned by our approach cover more complete object regions. We successfully extract class-agnostic object bounding boxes from C$^2$AM for object localization, as well as background cues to refine the CAMs generated by a classification network for semantic segmentation. Extensive experiments on the CUB-200-2011, ImageNet-1K, and PASCAL VOC2012 datasets show that both WSOL and WSSS can benefit from the proposed C$^2$AM.
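To make the pairing scheme concrete, the following is a minimal PyTorch sketch of a foreground-background contrastive objective in the spirit described above. All names (`masked_pool`, `c2am_loss`, `fg_map`) are hypothetical, and the loss is a simplification: the actual method additionally weights cross-image positive pairs (e.g., by feature similarity rank), which this sketch omits.

```python
import torch
import torch.nn.functional as F

def masked_pool(feats, mask):
    """Average-pool feature maps weighted by a soft spatial mask.

    feats: (B, C, H, W) backbone features; mask: (B, 1, H, W) in [0, 1].
    Returns (B, C) pooled vectors.
    """
    w = mask / (mask.sum(dim=(2, 3), keepdim=True) + 1e-6)
    return (feats * w).sum(dim=(2, 3))

def c2am_loss(feats, fg_map, eps=1e-6):
    """Simplified foreground-background contrastive objective (a sketch,
    not the authors' exact formulation).

    fg_map is the predicted class-agnostic activation map (sigmoid output),
    so 1 - fg_map covers the background. Cosine similarities are mapped to
    [0, 1] so they can be used inside the log terms.
    """
    fg = F.normalize(masked_pool(feats, fg_map), dim=1)        # (B, C) foreground vectors
    bg = F.normalize(masked_pool(feats, 1.0 - fg_map), dim=1)  # (B, C) background vectors

    sim_fb = 0.5 * (fg @ bg.t() + 1.0)   # (B, B) fg-bg similarities
    sim_ff = 0.5 * (fg @ fg.t() + 1.0)   # (B, B) cross-image fg-fg similarities
    sim_bb = 0.5 * (bg @ bg.t() + 1.0)   # (B, B) cross-image bg-bg similarities

    # Negative pairs: every foreground vs. every background representation,
    # within and across images, should be dissimilar.
    loss_neg = -torch.log(1.0 - sim_fb + eps).mean()

    # Positive pairs: cross-image foregrounds (and backgrounds) should be
    # similar; the trivial diagonal self-pairs are excluded.
    off_diag = ~torch.eye(fg.size(0), dtype=torch.bool, device=fg.device)
    loss_pos = -(torch.log(sim_ff[off_diag] + eps).mean()
                 + torch.log(sim_bb[off_diag] + eps).mean())

    return loss_pos + loss_neg
```

In this sketch, minimizing the negative term drives `fg_map` toward a partition in which pooled foreground and background features are dissimilar, while the positive term exploits the cross-image similarity of foregrounds and backgrounds, which is the mechanism the abstract credits for recovering more complete object regions.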