Text recognition is a long-standing research problem in document digitization. Existing approaches are usually built on a CNN for image understanding and an RNN for character-level text generation. In addition, a separate language model is usually needed as a post-processing step to improve the overall accuracy. In this paper, we propose TrOCR, an end-to-end text recognition approach with pre-trained image Transformer and text Transformer models, which leverages the Transformer architecture for both image understanding and wordpiece-level text generation. The TrOCR model is simple but effective; it can be pre-trained with large-scale synthetic data and fine-tuned with human-labeled datasets. Experiments show that TrOCR outperforms current state-of-the-art models on printed, handwritten, and scene text recognition tasks. The TrOCR models and code are publicly available at \url{https://aka.ms/trocr}.
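As a concrete illustration of the encoder-decoder setup described above, the following is a minimal inference sketch. It assumes the released checkpoints can be loaded through the Hugging Face \texttt{transformers} library and that a checkpoint name such as \texttt{microsoft/trocr-base-handwritten} is available; these details are not stated in the abstract and should be adjusted to the actual public release.

\begin{verbatim}
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Assumption: the released TrOCR checkpoints are hosted on the Hugging Face Hub
# under names such as "microsoft/trocr-base-handwritten"; pick the checkpoint
# matching the target task (printed, handwritten, or scene text).
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# "text_line.png" is a placeholder for any single-line text image.
image = Image.open("text_line.png").convert("RGB")

# The image Transformer encodes the resized line image into patch embeddings;
# the text Transformer decodes them autoregressively into wordpiece tokens,
# so no separate language model is needed as a post-processing step.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
\end{verbatim}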