Context-aware STR methods typically use internal autoregressive (AR) language models (LMs). The inherent limitations of AR models motivated two-stage methods that employ an external LM. Because the external LM is conditionally independent of the input image, it may erroneously rectify correct predictions, leading to significant inefficiencies. Our method, PARSeq, learns an ensemble of internal AR LMs with shared weights using Permutation Language Modeling. It unifies context-free non-AR and context-aware AR inference, and iterative refinement using bidirectional context. Using synthetic training data, PARSeq achieves state-of-the-art (SOTA) results on STR benchmarks (91.9% accuracy) and more challenging datasets. It establishes new SOTA results (96.0% accuracy) when trained on real data. PARSeq is optimal on the accuracy vs. parameter count, FLOPs, and latency trade-offs due to its simple, unified structure and parallel token processing. Owing to its extensive use of attention, it is robust on arbitrarily-oriented text, which is common in real-world images. Code, pretrained weights, and data are available at: https://github.com/baudm/parseq.
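To make the Permutation Language Modeling idea concrete, the sketch below (an illustrative assumption, not the authors' implementation; the function name and shapes are hypothetical) builds the per-permutation attention mask that lets a single shared-weight decoder act as an ensemble of AR LMs: the identity permutation recovers the standard left-to-right AR mask, while other permutations expose different contexts, whose union over permutations provides the bidirectional context used for iterative refinement.

```python
# A minimal sketch of per-permutation attention masks for PLM training.
# Not the PARSeq implementation; names and shapes are illustrative.
import numpy as np

def plm_attention_mask(perm: np.ndarray) -> np.ndarray:
    """Build a (T, T) mask for one factorization order `perm`.
    mask[q, k] == 1 means query position q may attend to position k,
    i.e. k was generated before q in this permutation."""
    T = len(perm)
    mask = np.zeros((T, T), dtype=np.int8)
    for i, q in enumerate(perm):
        for k in perm[:i]:  # context = positions earlier in the permutation
            mask[q, k] = 1
    return mask

T = 4
rng = np.random.default_rng(0)

# Identity permutation -> standard left-to-right AR (strictly lower-triangular) mask.
ar_mask = plm_attention_mask(np.arange(T))
assert np.array_equal(ar_mask, np.tril(np.ones((T, T), np.int8), k=-1))

# A random permutation -> another member of the shared-weight AR LM ensemble.
print(plm_attention_mask(rng.permutation(T)))
```

Training against many such orders with one set of decoder weights is what allows the same model to run context-free non-AR decoding, left-to-right AR decoding, or cloze-style bidirectional refinement at inference time.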