In this paper we study multi-modal few-shot object detection (FSOD), which uses both few-shot visual examples and class semantic information for detection; the two modalities are complementary by definition. Most previous work on multi-modal FSOD is fine-tuning-based, which is inefficient for online applications. Moreover, these methods usually require human prior knowledge, such as class names, to extract class semantic embeddings, which may be unavailable for rare classes. Our approach is motivated by the high-level conceptual similarity between (metric-based) meta-learning and prompt-based learning, which learn generalizable few-shot and zero-shot object detection models, respectively, without fine-tuning. Specifically, we combine the few-shot visual classifier learned via meta-learning with the text classifier learned via prompt-based learning to build multi-modal classifiers and detection models. In addition, to fully exploit pre-trained language models, we propose meta-learning-based cross-modal prompting, which generates soft prompts for the novel classes present in the few-shot visual examples; these soft prompts are then used to learn the text classifier. Knowledge distillation is introduced to learn the soft prompt generator without using human prior knowledge of class names, which may not be available for rare classes. Our insight is that the few-shot support images naturally contain related context and class semantics. We comprehensively evaluate the proposed multi-modal FSOD models on multiple few-shot object detection benchmarks, achieving promising results.
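To make the core idea concrete, below is a minimal sketch (PyTorch-style; the class, function names, and the teacher/student setup are our illustrative assumptions, not the paper's implementation). It shows how cosine-similarity scores against meta-learned visual prototypes and prompt-derived text embeddings could be fused into one multi-modal classifier, plus one plausible distillation loss for training a soft-prompt generator to match class-name-based text embeddings so that class names are not needed at test time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalClassifier(nn.Module):
    """Fuses cosine-similarity scores against few-shot visual prototypes
    (metric-based meta-learning) and prompt-derived text embeddings
    (prompt-based learning) into a single set of classification logits."""

    def __init__(self):
        super().__init__()
        # Learnable balance between the visual and textual branches;
        # sigmoid(0.0) = 0.5, i.e., equal weighting at initialization.
        self.alpha = nn.Parameter(torch.tensor(0.0))

    def forward(self, roi_feats, visual_protos, text_embeds):
        # roi_feats:     (N, D) region features from the detector
        # visual_protos: (C, D) per-class mean of few-shot support features
        # text_embeds:   (C, D) class embeddings from a frozen language
        #                model fed with (generated) soft prompts
        roi = F.normalize(roi_feats, dim=-1)
        vis_logits = roi @ F.normalize(visual_protos, dim=-1).t()
        txt_logits = roi @ F.normalize(text_embeds, dim=-1).t()
        a = torch.sigmoid(self.alpha)  # in (0, 1)
        return a * vis_logits + (1.0 - a) * txt_logits

def prompt_distill_loss(student_embeds, teacher_embeds):
    # One plausible distillation objective (an assumption, not the paper's
    # stated loss): the soft-prompt generator (student), which sees only the
    # few-shot support images, is trained to match text embeddings built
    # from real class names (teacher), so no class names are required at
    # inference time.
    s = F.normalize(student_embeds, dim=-1)
    t = F.normalize(teacher_embeds, dim=-1)
    return (1.0 - (s * t).sum(dim=-1)).mean()  # mean cosine distance
```

Normalizing all features and using cosine similarities puts the two modalities on a comparable scale before fusion, so a single scalar gate suffices to trade them off.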