We propose the idea of transferring common-sense knowledge from source categories to target categories for scalable object detection. In our setting, the training data for the source categories have bounding box annotations, while those for the target categories have only image-level annotations. Current state-of-the-art approaches focus on image-level visual or semantic similarity to adapt a detector trained on the source categories to the new target categories. In contrast, our key idea is to (i) use similarity not at the image level, but rather at the region level, and (ii) leverage richer common-sense cues (based on attributes, spatial relationships, etc.) to guide the algorithm towards learning the correct detections. We acquire such common-sense cues automatically from readily-available knowledge bases without any extra human effort. On the challenging MS COCO dataset, we find that using common-sense knowledge substantially improves detection performance over existing transfer-learning baselines.