Object detection with transformers (DETR) reaches performance competitive with Faster R-CNN via a transformer encoder-decoder architecture. Inspired by the great success of pre-training transformers in natural language processing, we propose a pretext task named random query patch detection to unsupervisedly pre-train DETR (UP-DETR) for object detection. Specifically, we randomly crop patches from the given image and then feed them as queries to the decoder. The model is pre-trained to detect these query patches from the original image. During the pre-training, we address two critical issues: multi-task learning and multi-query localization. (1) To trade off multi-task learning of classification and localization in the pretext task, we freeze the CNN backbone and propose a patch feature reconstruction branch that is jointly optimized with patch detection. (2) To perform multi-query localization, we first introduce UP-DETR with a single-query patch and then extend it to multi-query patches with object query shuffle and an attention mask. In our experiments, UP-DETR significantly boosts the performance of DETR, with faster convergence and higher precision on the PASCAL VOC and COCO datasets. The code will be available soon.
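To make the pretext task concrete, the following is a minimal sketch of random query patch detection, not the authors' implementation: it assumes a frozen ResNet-50 as the CNN backbone, a vanilla `nn.Transformer` as a stand-in for DETR's encoder-decoder, and hypothetical head names (`bbox_head`, `match_head`, `recon_head`) for the single-query case.

```python
# Sketch only: illustrates the pre-training signal (detect a randomly cropped
# patch in its source image) under the assumptions stated above.
import torch
import torch.nn as nn
import torchvision


class RandomQueryPatchDetection(nn.Module):
    def __init__(self, hidden_dim=256, num_queries=10):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        # Keep layers up to the final feature map; drop avgpool and fc.
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        for p in self.backbone.parameters():      # frozen CNN backbone (issue 1)
            p.requires_grad_(False)
        self.input_proj = nn.Conv2d(2048, hidden_dim, kernel_size=1)
        self.patch_proj = nn.Linear(2048, hidden_dim)   # project pooled patch feature
        self.transformer = nn.Transformer(d_model=hidden_dim, batch_first=True)
        self.query_embed = nn.Embedding(num_queries, hidden_dim)
        self.bbox_head = nn.Linear(hidden_dim, 4)       # (cx, cy, w, h), normalized
        self.match_head = nn.Linear(hidden_dim, 2)      # patch present vs. background
        self.recon_head = nn.Linear(hidden_dim, 2048)   # patch feature reconstruction

    def forward(self, images, patches):
        # images: (B, 3, H, W); patches: (B, 3, h, w) random crops of those images
        feat = self.backbone(images)                             # (B, 2048, H/32, W/32)
        src = self.input_proj(feat).flatten(2).transpose(1, 2)   # (B, HW, hidden_dim)
        patch_feat = self.backbone(patches).mean(dim=(2, 3))     # GAP -> (B, 2048)
        # Single-query case: add the patch feature to every object query.
        queries = self.query_embed.weight.unsqueeze(0) \
            + self.patch_proj(patch_feat).unsqueeze(1)           # (B, num_queries, hidden_dim)
        hs = self.transformer(src, queries)                      # decoder outputs
        return (self.bbox_head(hs).sigmoid(),   # box of the query patch in the image
                self.match_head(hs),            # binary match classification
                self.recon_head(hs))            # reconstruct the 2048-d patch feature
```

The reconstruction head stands in for the patch feature reconstruction branch mentioned in (1); extending to multi-query patches would additionally require assigning patches to query groups with an attention mask and shuffling object queries, which this sketch omits.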