Referring image segmentation aims to segment a referent via a natural linguistic expression. Due to the distinct data properties of text and image, it is challenging for a network to align text and pixel-level features well. Existing approaches use pretrained models to facilitate learning, yet they transfer the language and vision knowledge separately from pretrained models, ignoring the multi-modal correspondence information. Inspired by the recent advance in Contrastive Language-Image Pretraining (CLIP), in this paper, we propose an end-to-end CLIP-Driven Referring Image Segmentation framework (CRIS). To transfer the multi-modal knowledge effectively, CRIS resorts to vision-language decoding and contrastive learning to achieve text-to-pixel alignment. More specifically, we design a vision-language decoder to propagate fine-grained semantic information from textual representations to each pixel-level activation, which promotes consistency between the two modalities. In addition, we present text-to-pixel contrastive learning, which explicitly enforces the text feature to be similar to the related pixel-level features and dissimilar to irrelevant ones. Experimental results on three benchmark datasets demonstrate that our proposed framework significantly outperforms state-of-the-art methods without any post-processing. The code will be released.
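To make the text-to-pixel contrastive idea concrete, below is a minimal sketch of one plausible formulation: a sentence-level text feature is compared against every pixel-level feature, and pixels inside the ground-truth mask are pulled toward the text embedding while background pixels are pushed away. This is an illustrative assumption, not the authors' exact loss; the function name, tensor shapes, and temperature value are all hypothetical.

```python
import torch
import torch.nn.functional as F

def text_to_pixel_contrastive_loss(text_feat, pixel_feats, gt_mask, temperature=0.07):
    """Illustrative sketch of a text-to-pixel contrastive objective.

    text_feat:   (B, C)        sentence-level textual representation
    pixel_feats: (B, C, H, W)  pixel-level visual representations from the decoder
    gt_mask:     (B, H, W)     binary ground-truth mask (1 = referent, 0 = background)
    Shapes and the temperature are assumptions for this sketch.
    """
    # L2-normalize both modalities so the dot product is a cosine similarity.
    text_feat = F.normalize(text_feat, dim=1)      # (B, C)
    pixel_feats = F.normalize(pixel_feats, dim=1)  # (B, C, H, W)

    # Similarity between the text feature and every pixel feature.
    sim = torch.einsum('bc,bchw->bhw', text_feat, pixel_feats) / temperature

    # Pixels on the referent should score high, all other pixels low.
    return F.binary_cross_entropy_with_logits(sim, gt_mask.float())
```

Under this sketch, the loss directly couples the textual representation with every pixel-level activation, which is one way to realize the explicit text-to-pixel alignment described above.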