Sequence generation models have recently made significant progress in unifying various vision tasks. Although some auto-regressive models have demonstrated promising results in end-to-end text spotting, they rely on task-specific detection formats that cannot represent diverse text shapes, and they are limited in the maximum number of text instances they can detect. To overcome these limitations, we propose a UNIfied scene Text Spotter, called UNITS. Our model unifies various detection formats, including quadrilaterals and polygons, allowing it to detect text of arbitrary shapes. Additionally, we apply starting-point prompting, which enables the model to resume extraction from an arbitrary starting point and thereby extract more text instances than the number it was trained on. Experimental results demonstrate that our method achieves competitive performance compared to state-of-the-art methods. Further analysis confirms that UNITS can extract a larger number of texts than it was trained on. We provide the code for our method at https://github.com/clovaai/units.
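As a rough illustration of the two ideas above, the sketch below serializes text instances of mixed detection formats into a single token sequence that begins with a starting-point prompt. This is not the authors' implementation; the token names, coordinate binning, and sequence layout are all illustrative assumptions.

```python
# Hypothetical sketch of a unified text-spotting target sequence.
# Token names, bin count, and layout are assumptions for illustration,
# not the actual UNITS implementation.

def quantize(coord, size, num_bins=1000):
    """Map a pixel coordinate to a discrete bin index token."""
    return min(int(coord / size * num_bins), num_bins - 1)

def build_sequence(instances, img_w, img_h, start_point=(0, 0)):
    """Serialize text instances (each with a shape format, point list,
    and transcription) into one token sequence. The sequence opens with
    a starting-point prompt, so decoding can resume from any location
    in the image rather than always starting from the top-left."""
    seq = ["<start>",
           str(quantize(start_point[0], img_w)),
           str(quantize(start_point[1], img_h))]
    for inst in instances:
        # Per-instance format token unifies quads, polygons, etc.
        seq.append(f"<{inst['format']}>")
        for x, y in inst["points"]:
            seq.append(str(quantize(x, img_w)))
            seq.append(str(quantize(y, img_h)))
        seq.extend(["<text>", inst["text"], "</text>"])
    seq.append("<eos>")
    return seq

example = [{"format": "quad",
            "points": [(10, 20), (110, 20), (110, 60), (10, 60)],
            "text": "HELLO"}]
tokens = build_sequence(example, img_w=640, img_h=480)
```

At inference time, a model trained on such sequences could be re-prompted with the coordinates of the last decoded instance as the new starting point, continuing extraction past the instance-count limit seen during training.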