Pre-Trained Vision-Language Models (VL-PTMs) have shown promising capabilities in grounding natural language in image data, facilitating a broad variety of cross-modal tasks. However, we note that there exists a significant gap between the objective forms of model pre-training and fine-tuning, resulting in a need for large amounts of labeled data to stimulate the visual grounding capability of VL-PTMs for downstream tasks. To address the challenge, we present Cross-modal Prompt Tuning (CPT, alternatively, Colorful Prompt Tuning), a novel paradigm for tuning VL-PTMs, which reformulates visual grounding into a fill-in-the-blank problem with color-based co-referential markers in image and text, maximally mitigating the gap. In this way, CPT enables strong few-shot and even zero-shot visual grounding capabilities of VL-PTMs. Comprehensive experimental results show that the prompt-tuned VL-PTMs outperform their fine-tuned counterparts by a large margin (e.g., 17.3% absolute accuracy improvement, and 73.8% relative standard deviation reduction on average with one shot in RefCOCO evaluation). We make the data and code for this paper publicly available at https://github.com/thunlp/CPT.
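To make the core idea concrete, the following is a minimal, illustrative sketch of how a CPT-style query might be constructed: candidate image regions are marked with distinct translucent colors, and the referring expression is turned into a fill-in-the-blank query whose answer is a color word. This is not the authors' implementation; the region proposals, color palette, template wording, and file name are hypothetical placeholders, and the actual color prediction would be produced by a masked vision-language model, which is omitted here.

```python
# Illustrative sketch of CPT-style colored prompt construction (assumptions noted above).
from PIL import Image, ImageDraw

# (color word, RGB) pairs used as co-referential markers in image and text.
COLORS = [("red", (240, 0, 30)), ("green", (0, 240, 30)), ("blue", (0, 30, 240))]

def color_regions(image_path, regions, alpha=96):
    """Blend a distinct translucent color over each candidate region proposal."""
    img = Image.open(image_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    for (x1, y1, x2, y2), (_, rgb) in zip(regions, COLORS):
        draw.rectangle([x1, y1, x2, y2], fill=rgb + (alpha,))
    return Image.alpha_composite(img, overlay)

def build_text_prompt(query):
    """Turn a referring expression into a masked color-prediction query."""
    return f"{query} is in [MASK] color."

if __name__ == "__main__":
    regions = [(30, 40, 120, 200), (150, 60, 260, 220)]  # hypothetical proposals
    marked = color_regions("example.jpg", regions)       # placeholder image path
    prompt = build_text_prompt("the man holding an umbrella")
    # A masked VL-PTM would then score the color words ("red", "green", ...) at
    # [MASK]; the highest-scoring color identifies the referred region.
    print(prompt)
```

The point of the sketch is that visual grounding is reduced to the same masked-token prediction objective used in pre-training, which is why little or no labeled data is needed to elicit the grounding capability.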