Current object detectors are limited in vocabulary size due to the small scale of detection datasets. Image classifiers, on the other hand, reason about much larger vocabularies, as their datasets are larger and easier to collect. We propose Detic, which simply trains the classifiers of a detector on image classification data and thus expands the vocabulary of detectors to tens of thousands of concepts. Unlike prior work, Detic does not assign image labels to boxes based on model predictions, making it much easier to implement and compatible with a range of detection architectures and backbones. Our results show that Detic yields excellent detectors even for classes without box annotations. It outperforms prior work on both open-vocabulary and long-tail detection benchmarks. Detic provides a gain of 2.4 mAP for all classes and 8.3 mAP for novel classes on the open-vocabulary LVIS benchmark. On the standard LVIS benchmark, Detic reaches 41.7 mAP for all classes and 41.7 mAP for rare classes. For the first time, we train a detector with all the twenty-one-thousand classes of the ImageNet dataset and show that it generalizes to new datasets without fine-tuning. Code is available at https://github.com/facebookresearch/Detic.
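The key idea above, training a detector's classifier head on image-level labels without assigning those labels to predicted boxes, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes Detic's fixed rule of applying the classification loss to the largest proposal, and the function and argument names are illustrative.

```python
import math

def detic_image_label_loss(proposal_boxes, class_logits, image_label):
    """Sketch of Detic-style supervision from an image classification label.

    Rather than matching the image-level label to boxes chosen by model
    predictions, the loss is applied to one fixed proposal (here, the one
    with the largest area), so no prediction-based assignment is needed.

    proposal_boxes: list of (x1, y1, x2, y2) region proposals
    class_logits:   per-proposal lists of classifier logits
    image_label:    int class index from the classification dataset
    """
    # Pick the proposal with the largest area: a fixed choice that does not
    # depend on the model's current predictions.
    areas = [(x2 - x1) * (y2 - y1) for (x1, y1, x2, y2) in proposal_boxes]
    biggest = max(range(len(areas)), key=areas.__getitem__)

    # Standard softmax cross-entropy on that single proposal's logits,
    # computed in a numerically stable way.
    logits = class_logits[biggest]
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[image_label]
```

Because the supervised proposal is chosen independently of the detector's outputs, this loss plugs into any architecture that produces proposals and per-class logits, which is what makes the approach compatible with a range of detectors and backbones.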