Pre-Trained Vision-Language Models (VL-PTMs) have shown promising capabilities in grounding natural language in image data, facilitating a broad variety of cross-modal tasks. However, we note that there exists a significant gap between the objective forms of model pre-training and fine-tuning, resulting in a need for large amounts of labeled data to stimulate the visual grounding capability of VL-PTMs for downstream tasks. To address this challenge, we present Cross-modal Prompt Tuning (CPT, alternatively, Colorful Prompt Tuning), a novel paradigm for tuning VL-PTMs, which reformulates visual grounding into a fill-in-the-blank problem with color-based co-referential markers in image and text, maximally mitigating the gap. In this way, CPT enables strong few-shot and even zero-shot visual grounding capabilities of VL-PTMs. Comprehensive experimental results show that the prompt-tuned VL-PTMs outperform their fine-tuned counterparts by a large margin (e.g., 17.3% absolute accuracy improvement, and 73.8% relative standard deviation reduction on average with one shot in RefCOCO evaluation). All the data and code will be made available to facilitate future research.
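To make the core idea concrete, the following is a minimal, hypothetical sketch of the CPT reformulation described above: candidate image regions are overlaid with distinct semi-transparent colors, and the grounding query is rewritten as a fill-in-the-blank sentence so that a masked-language-model head can select the color word, and thereby the region. The helper names (`build_visual_prompt`, `build_textual_prompt`, the color set, and the placeholder inputs) are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical sketch of CPT-style prompt construction (not the authors' code).
from PIL import Image, ImageDraw

# A small set of visually distinct colors paired with their color words.
COLOR_SET = [("red", (255, 0, 0)), ("green", (0, 255, 0)), ("blue", (0, 0, 255))]


def build_visual_prompt(image: Image.Image, regions, alpha=128):
    """Overlay each candidate region with a distinct semi-transparent color block."""
    img = image.convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    for (x1, y1, x2, y2), (_, rgb) in zip(regions, COLOR_SET):
        draw.rectangle([x1, y1, x2, y2], fill=rgb + (alpha,))
    return Image.alpha_composite(img, overlay)


def build_textual_prompt(query: str) -> str:
    """Rewrite the grounding query as a fill-in-the-blank (masked) sentence."""
    return f"{query} is in [MASK] color."


# Usage: the VL-PTM scores the candidate color words at the [MASK] position;
# the highest-scoring color identifies the referred region, so the pre-trained
# masked-language-modeling objective is reused without training a new grounding head.
image = Image.new("RGB", (224, 224), "white")            # placeholder image
regions = [(10, 10, 100, 100), (120, 20, 200, 120)]      # candidate region proposals
prompted_image = build_visual_prompt(image, regions)
prompted_text = build_textual_prompt("the horse watched by the woman")
```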