Scene text retrieval aims to localize and search all text instances in an image gallery that are the same as or similar to a given query text. This task is usually realized by matching the query text against the recognized words output by an end-to-end scene text spotter. In this paper, we address the problem by directly learning a cross-modal similarity between a query text and each text instance in natural images. Specifically, we establish an end-to-end trainable network that jointly optimizes scene text detection and cross-modal similarity learning. In this way, scene text retrieval can be performed simply by ranking the detected text instances with the learned similarity. Experiments on three benchmark datasets demonstrate that our method consistently outperforms state-of-the-art scene text spotting/retrieval approaches. In particular, the proposed framework of joint detection and similarity learning achieves significantly better performance than methods that handle detection and matching separately. Code is available at: https://github.com/lanfeng4659/STR-TDSL.
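To make the retrieval step concrete, the following is a minimal sketch of ranking detected text instances against a query, assuming the network has already produced one embedding for the query string and one per detected instance. The function name, shapes, and the use of cosine similarity are illustrative assumptions; the paper learns the similarity jointly with detection rather than fixing a metric.

import numpy as np

def rank_instances(query_emb: np.ndarray, instance_embs: np.ndarray) -> np.ndarray:
    """Return indices of detected text instances, most similar first.

    query_emb:     (d,)   embedding of the query text
    instance_embs: (n, d) embeddings of the n detected text instances

    Cosine similarity stands in here for the learned cross-modal
    similarity; only the ranking step itself is being illustrated.
    """
    q = query_emb / (np.linalg.norm(query_emb) + 1e-8)
    x = instance_embs / (np.linalg.norm(instance_embs, axis=1, keepdims=True) + 1e-8)
    sims = x @ q                 # one similarity score per detected instance
    return np.argsort(-sims)     # indices in descending order of similarity

# Toy usage: rank 3 hypothetical detections in a 4-d embedding space.
rng = np.random.default_rng(0)
print(rank_instances(rng.normal(size=4), rng.normal(size=(3, 4))))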