Multi-modal reasoning systems rely on a pre-trained object detector to extract regions of interest from the image. However, this crucial module is typically used as a black box, trained independently of the downstream task and on a fixed vocabulary of objects and attributes. This makes it challenging for such systems to capture the long tail of visual concepts expressed in free form text. In this paper we propose MDETR, an end-to-end modulated detector that detects objects in an image conditioned on a raw text query, like a caption or a question. We use a transformer-based architecture to reason jointly over text and image by fusing the two modalities at an early stage of the model. We pre-train the network on 1.3M text-image pairs, mined from pre-existing multi-modal datasets having explicit alignment between phrases in text and objects in the image. We then fine-tune on several downstream tasks such as phrase grounding, referring expression comprehension and segmentation, achieving state-of-the-art results on popular benchmarks. We also investigate the utility of our model as an object detector on a given label set when fine-tuned in a few-shot setting. We show that our pre-training approach provides a way to handle the long tail of object categories which have very few labelled instances. Our approach can be easily extended for visual question answering, achieving competitive performance on GQA and CLEVR. The code and models are available at https://github.com/ashkamath/mdetr.
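To make the early-fusion idea concrete, below is a minimal PyTorch sketch of a detector that concatenates image and text tokens before a joint transformer encoder and decodes boxes with learned object queries. It is an illustrative toy, not the released MDETR implementation: the conv backbone, the toy text embedding (standing in for a pre-trained language model), and all layer counts and sizes are placeholder assumptions.

```python
# Illustrative sketch of early text-image fusion for modulated detection.
# Not the authors' implementation; all module choices and sizes are placeholders.
import torch
import torch.nn as nn

class EarlyFusionDetector(nn.Module):
    def __init__(self, d_model=256, num_queries=100, vocab_size=1000):
        super().__init__()
        # Image branch: a small conv backbone projecting to d_model channels.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1),
        )
        # Text branch: a toy embedding stands in for a pre-trained language model.
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Joint encoder over concatenated image + text tokens (early fusion).
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # DETR-style decoder with learned object queries.
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.queries = nn.Embedding(num_queries, d_model)
        self.box_head = nn.Linear(d_model, 4)  # (cx, cy, w, h), normalized

    def forward(self, images, token_ids):
        # images: (B, 3, H, W); token_ids: (B, L) integer text tokens
        feats = self.backbone(images)                    # (B, d, h, w)
        img_tokens = feats.flatten(2).transpose(1, 2)    # (B, h*w, d)
        txt_tokens = self.text_embed(token_ids)          # (B, L, d)
        fused = torch.cat([img_tokens, txt_tokens], 1)   # fuse modalities early
        memory = self.encoder(fused)
        q = self.queries.weight.unsqueeze(0).expand(images.size(0), -1, -1)
        hs = self.decoder(q, memory)
        return self.box_head(hs).sigmoid()               # boxes conditioned on the text

boxes = EarlyFusionDetector()(torch.randn(2, 3, 128, 128),
                              torch.randint(0, 1000, (2, 12)))
print(boxes.shape)  # torch.Size([2, 100, 4])
```

In this toy setup, the text query directly modulates which boxes are predicted because every image token attends to every text token in the shared encoder; a matching loss between predicted boxes and grounded phrases (omitted here) would supervise that alignment.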