Every Scene Text Recognition (STR) task comprises two prominent sub-tasks: text localization and text recognition. However, in real-world applications with fixed camera positions, such as equipment monitor reading, image-based data entry, and printed document data extraction, the underlying data tends to be regular scene text. In such tasks, generic, bulky models carry significant disadvantages compared to customized, efficient models in terms of deployability, data privacy, and reliability. This paper therefore presents the underlying concepts, theory, implementation, and experimental results for developing models that are highly specialized for the task at hand, achieving not only state-of-the-art (SOTA) performance but also minimal model size, shorter inference time, and high reliability. We introduce a novel deep learning architecture, GeoTRNet, trained to recognize digits in regular scene images using only the geometric features present, mimicking human perception of text. The code is publicly available at https://github.com/ACRA-FL/GeoTRNet