Contrary to popular belief, Optical Character Recognition (OCR) remains a challenging problem when text occurs in unconstrained environments, such as natural scenes, due to geometric distortions, complex backgrounds, and diverse fonts. In this paper, we present a segmentation-free OCR system that combines deep learning methods, synthetic training data generation, and data augmentation techniques. We render synthetic training data using large text corpora and over 2000 fonts. To simulate text occurring in complex natural scenes, we augment the rendered samples with geometric distortions and with a proposed data augmentation technique: alpha-compositing with background textures. Our models employ a convolutional neural network encoder to extract features from text images. Inspired by recent progress in neural machine translation and language modeling, we examine the capabilities of both recurrent and convolutional neural networks in modeling the interactions between input elements.
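The alpha-compositing augmentation mentioned above can be sketched as a simple per-pixel blend of a rendered text image over a background texture. This is a minimal illustration, not the paper's actual pipeline; the function name, shapes, and alpha value are our own assumptions.

```python
import numpy as np

def alpha_composite(text_img, texture, alpha):
    """Blend a rendered text image over a background texture.

    text_img, texture: float arrays in [0, 1] with the same shape.
    alpha: opacity of the text layer (scalar or per-pixel array).
    """
    return alpha * text_img + (1.0 - alpha) * texture

# Toy example (values are illustrative): black "ink" composited
# onto a light random texture at 70% opacity.
rng = np.random.default_rng(0)
text = np.zeros((4, 4))                        # black text patch
texture = rng.uniform(0.4, 0.9, size=(4, 4))   # background texture
sample = alpha_composite(text, texture, alpha=0.7)
```

Varying `alpha` and the texture source across samples yields training images whose text contrast and background clutter resemble natural-scene conditions.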