Typical text spotters follow a two-stage spotting paradigm that first detects the boundary of each text instance and then performs text recognition within the detected region. Despite the remarkable progress of this paradigm, an important limitation is that the performance of text recognition depends heavily on the precision of text detection, resulting in potential error propagation from detection to recognition. In this work, we propose the single-shot Self-Reliant Scene Text Spotter v2 (SRSTS v2), which circumvents this limitation by decoupling recognition from detection while optimizing the two tasks collaboratively. Specifically, SRSTS v2 samples representative feature points around each potential text instance and conducts text detection and recognition in parallel, guided by these sampled points. Text recognition is therefore no longer dependent on detection, alleviating error propagation from detection to recognition. Moreover, the sampling module is learned under supervision from both detection and recognition, which allows for collaborative optimization and mutual enhancement between the two tasks. Benefiting from this sampling-driven concurrent spotting framework, our approach can recognize text instances correctly even when the precise text boundaries are challenging to detect. Extensive experiments on four benchmarks demonstrate that our method compares favorably to state-of-the-art spotters.
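To make the sampling-driven concurrent design concrete, below is a minimal PyTorch sketch of the idea: a sampling module predicts representative points around each anchor location, features gathered at those points feed two parallel heads, and because the bilinear sampling is differentiable, the sampler receives gradients from both the detection and recognition losses. All module names, tensor shapes, and hyper-parameters (feat_dim, num_points, vocab_size, max_len, the quadrilateral boundary parameterization) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a sampling-driven concurrent spotting head.
# Assumed/hypothetical: all names, shapes, and hyper-parameters below.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SamplingDrivenSpotter(nn.Module):
    def __init__(self, feat_dim=256, num_points=16, vocab_size=97, max_len=25):
        super().__init__()
        # Sampling module: predicts (x, y) offsets for representative
        # feature points around each anchor location.
        self.offset_head = nn.Conv2d(feat_dim, 2 * num_points, 3, padding=1)
        # Detection head: regresses a text boundary (here, a quadrilateral)
        # from the sampled point features.
        self.det_head = nn.Linear(num_points * feat_dim, 8)
        # Recognition head: decodes a character sequence from the SAME
        # sampled features, so it never depends on the detected boundary.
        self.rec_head = nn.Linear(num_points * feat_dim, max_len * vocab_size)
        self.num_points = num_points
        self.max_len = max_len
        self.vocab_size = vocab_size

    def forward(self, feats):
        b, c, h, w = feats.shape
        # Per-location sampling offsets in normalized [-1, 1] coordinates.
        offsets = torch.tanh(self.offset_head(feats))        # (B, 2P, H, W)
        offsets = offsets.view(b, self.num_points, 2, h, w)

        # Base grid of anchor locations, also in [-1, 1].
        ys = torch.linspace(-1, 1, h, device=feats.device)
        xs = torch.linspace(-1, 1, w, device=feats.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        base = torch.stack((gx, gy), dim=0)                  # (2, H, W)

        # Gather features at the predicted points by bilinear interpolation.
        # grid_sample keeps the pipeline differentiable, so the sampler is
        # supervised by BOTH heads (collaborative optimization).
        grid = (base + offsets).permute(0, 1, 3, 4, 2)       # (B, P, H, W, 2)
        grid = grid.reshape(b, self.num_points * h, w, 2)
        sampled = F.grid_sample(feats, grid, align_corners=False)
        sampled = sampled.view(b, c, self.num_points, h, w)
        sampled = sampled.permute(0, 3, 4, 2, 1).reshape(b, h, w, -1)

        # Detection and recognition run in parallel on the shared samples.
        boundary = self.det_head(sampled)                    # (B, H, W, 8)
        logits = self.rec_head(sampled)
        logits = logits.view(b, h, w, self.max_len, self.vocab_size)
        return boundary, logits


if __name__ == "__main__":
    model = SamplingDrivenSpotter()
    feats = torch.randn(2, 256, 32, 32)  # backbone features (assumed shape)
    boundary, logits = model(feats)
    print(boundary.shape, logits.shape)   # (2, 32, 32, 8), (2, 32, 32, 25, 97)
```

The key design point the sketch illustrates is that recognition reads from the sampled point features directly rather than from a region cropped by the detected boundary, so an imprecise boundary cannot corrupt the recognition input.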