Inspired by the success of vision-language models (VLMs) in zero-shot classification, recent works attempt to extend this line of work to object detection by leveraging the localization ability of pre-trained VLMs and generating pseudo labels for unseen classes in a self-training manner. However, since current VLMs are usually pre-trained by aligning a sentence embedding with a global image embedding, using them directly lacks the fine-grained alignment at the object-instance level that is the core of detection. In this paper, we propose a simple but effective Pretrain-adaPt-Pseudo labeling paradigm for Open-Vocabulary Detection (P$^3$OVD) that introduces a fine-grained visual-text prompt adapting stage to enhance the current self-training paradigm with more powerful fine-grained alignment. During the adapting stage, we enable the VLM to obtain fine-grained alignment by using learnable text prompts to solve an auxiliary dense pixel-wise prediction task. Furthermore, we propose a visual prompt module that provides prior task information (i.e., the categories to be predicted) to the vision branch, better adapting the pre-trained VLM to the downstream task. Experiments show that our method achieves state-of-the-art performance for open-vocabulary object detection, e.g., 31.5% mAP on unseen classes of COCO.
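To make the adapting stage more concrete, the sketch below illustrates the two ingredients the abstract names: learnable text prompts prepended to class-name token embeddings (in the CoOp style), and a dense pixel-wise alignment between per-pixel image features and the prompted text embeddings. This is a minimal, hypothetical PyTorch sketch, not the paper's released code: the names `LearnableTextPrompt` and `dense_alignment_logits`, the shared-context design, and the temperature value are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableTextPrompt(nn.Module):
    """CoOp-style shared learnable context vectors prepended to the
    token embeddings of each class name (hypothetical sketch)."""

    def __init__(self, num_context: int = 16, embed_dim: int = 512):
        super().__init__()
        # Learnable context tokens shared across all classes.
        self.context = nn.Parameter(torch.randn(num_context, embed_dim) * 0.02)

    def forward(self, class_token_embeds: torch.Tensor) -> torch.Tensor:
        # class_token_embeds: (num_classes, num_name_tokens, embed_dim)
        n = class_token_embeds.size(0)
        ctx = self.context.unsqueeze(0).expand(n, -1, -1)
        # Prepend the shared context to each class's name tokens; the
        # result is fed to the (frozen) VLM text encoder.
        return torch.cat([ctx, class_token_embeds], dim=1)


def dense_alignment_logits(
    pixel_feats: torch.Tensor,       # (B, C, H, W) dense image features
    class_embeds: torch.Tensor,      # (K, C) prompted text embeddings
    temperature: float = 0.07,       # assumed value, tune per setup
) -> torch.Tensor:
    """Per-pixel cosine similarity to every class, i.e. the logits of an
    auxiliary dense pixel-wise prediction task; returns (B, K, H, W)."""
    pixel_feats = F.normalize(pixel_feats, dim=1)
    class_embeds = F.normalize(class_embeds, dim=1)
    return torch.einsum("bchw,kc->bkhw", pixel_feats, class_embeds) / temperature
```

Under these assumptions, training the prompt parameters with a pixel-wise cross-entropy on `dense_alignment_logits` (against pseudo pixel labels) would push the VLM toward the instance-level alignment that global image-text pre-training lacks.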