Scene text recognition has attracted increasing attention owing to its wide range of applications. Most state-of-the-art methods adopt an encoder-decoder framework with an attention mechanism that generates text autoregressively from left to right. Despite convincing accuracy, the inference speed of such methods is limited by the one-by-one decoding strategy. In contrast, non-autoregressive models predict all characters in parallel with a much shorter inference time, but their accuracy falls considerably behind that of autoregressive counterparts. In this paper, we propose a Parallel, Iterative and Mimicking Network (PIMNet) to balance accuracy and efficiency. Specifically, PIMNet adopts a parallel attention mechanism to predict the text faster and an iterative generation mechanism to make the predictions more accurate; in each iteration, the context information is fully explored. To improve learning of the hidden layer, we exploit mimicking learning in the training phase: an additional autoregressive decoder is adopted, and the parallel decoder mimics it by fitting the outputs of its hidden layer. With a shared backbone between the two decoders, the proposed PIMNet can be trained end-to-end without pre-training. During inference, the autoregressive decoder branch is removed for faster speed. Extensive experiments on public benchmarks demonstrate the effectiveness and efficiency of PIMNet. Our code will be available at https://github.com/Pay20Y/PIMNet.
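To make the two core ideas of the abstract concrete, the following is a minimal PyTorch sketch of (a) a parallel attention decoder that predicts all character positions at once from learned positional queries, and (b) a training loss in which the parallel decoder mimics an autoregressive teacher decoder by fitting its hidden-layer outputs. All module names, dimensions, and the choice of MSE as the mimicking objective are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelDecoder(nn.Module):
    """Sketch of a parallel attention decoder: one learned query per
    output position attends to the visual features, so all characters
    are predicted in a single step rather than left to right."""
    def __init__(self, d_model=512, num_classes=97, max_len=25):
        super().__init__()
        self.pos_queries = nn.Parameter(torch.randn(max_len, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=8,
                                          batch_first=True)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, visual_feats):
        # visual_feats: (batch, seq, d_model) features from the shared backbone
        q = self.pos_queries.unsqueeze(0).expand(visual_feats.size(0), -1, -1)
        hidden, _ = self.attn(q, visual_feats, visual_feats)
        return hidden, self.classifier(hidden)  # (b, max_len, d), (b, max_len, C)

def training_loss(par_hidden, par_logits, ar_hidden, ar_logits, targets):
    """Cross-entropy on both branches plus a mimicking term that pulls
    the parallel decoder's hidden states toward the (detached)
    autoregressive teacher's. The teacher branch is used only during
    training and is removed at inference time."""
    ce_par = F.cross_entropy(par_logits.transpose(1, 2), targets)
    ce_ar = F.cross_entropy(ar_logits.transpose(1, 2), targets)
    mimic = F.mse_loss(par_hidden, ar_hidden.detach())
    return ce_par + ce_ar + mimic
```

At inference, only `ParallelDecoder` runs; iterative refinement (re-predicting low-confidence positions using context from confident ones) would wrap its forward pass in a small fixed number of passes rather than one pass per character.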