Large-scale Vision-Language Models, such as CLIP, learn powerful image-text representations that have found numerous applications, from zero-shot classification to text-to-image generation. Despite that, their capabilities for solving novel discriminative tasks via prompting fall behind those of large language models, such as GPT-3. Here we explore the idea of visual prompt engineering for solving computer vision tasks beyond classification by editing in image space instead of in text. In particular, we discover an emergent ability of CLIP: by simply drawing a red circle around an object, we can direct the model's attention to that region while still maintaining global information. We show the power of this simple approach by achieving state-of-the-art results in zero-shot referring expression comprehension and strong performance on keypoint localization tasks. Finally, we draw attention to some potential ethical concerns of large vision-language models.
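To make the idea concrete, the sketch below shows one way red-circle prompting could be scored with an off-the-shelf CLIP model: a red ellipse is drawn around each candidate region, and the marked image is compared against a referring expression. It assumes the openai `clip` package, PyTorch, and Pillow; the image path, candidate boxes, prompt text, and circle thickness are illustrative placeholders, not the paper's exact pipeline.

```python
# Minimal sketch of red-circle visual prompting with CLIP.
# Assumes the openai `clip` package, torch, and Pillow are installed;
# the boxes, prompt, and circle width below are illustrative.
import clip
import torch
from PIL import Image, ImageDraw

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

def draw_red_circle(image, box, width=6):
    """Return a copy of `image` with a red ellipse drawn around `box` = (x0, y0, x1, y1)."""
    marked = image.copy()
    ImageDraw.Draw(marked).ellipse(box, outline=(255, 0, 0), width=width)
    return marked

def score_boxes(image, boxes, text):
    """Score each candidate box by CLIP similarity between the circled image and `text`."""
    tokens = clip.tokenize([text]).to(device)
    with torch.no_grad():
        text_feat = model.encode_text(tokens)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
        scores = []
        for box in boxes:
            pixels = preprocess(draw_red_circle(image, box)).unsqueeze(0).to(device)
            img_feat = model.encode_image(pixels)
            img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
            scores.append((img_feat @ text_feat.T).item())
    return scores

# Example: pick the candidate region that best matches a referring expression
# (hypothetical file name and region proposals).
image = Image.open("example.jpg").convert("RGB")
boxes = [(40, 60, 180, 220), (200, 80, 340, 240)]
scores = score_boxes(image, boxes, "the person wearing a red hat")
print(max(zip(scores, boxes)))
```

In this reading, zero-shot referring expression comprehension reduces to drawing the circle on each region proposal and picking the one whose marked image scores highest against the text, with no fine-tuning of CLIP itself.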