This work proposes an attention-based sequence-to-sequence model for handwritten word recognition and explores transfer learning for data-efficient training of HTR systems. To overcome the scarcity of training data, the proposed approach leverages models pre-trained on scene text images as a starting point for tailoring handwriting recognition models. A ResNet feature extraction stage and a bidirectional LSTM-based sequence modeling stage together form the encoder, while the prediction stage consists of a decoder with a content-based attention mechanism. The effectiveness of the proposed end-to-end HTR system is empirically evaluated on the novel multi-writer Imgur5K dataset and the IAM dataset. The experimental results demonstrate the performance of the HTR framework, further supported by an in-depth analysis of the error cases. Source code and pre-trained models are available at https://github.com/dmitrijsk/AttentionHTR.
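To make the encoder-decoder pipeline concrete, the sketch below shows one plausible PyTorch realization of the described architecture: a ResNet backbone whose feature map is collapsed along the height axis into a sequence, a bidirectional LSTM over that sequence, and a decoder step driven by Bahdanau-style content-based attention. This is a minimal illustration under assumed dimensions, not the authors' implementation (see the linked repository for that); class names such as `Encoder`, `ContentAttention`, and `Decoder` and all hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class Encoder(nn.Module):
    """ResNet feature extraction followed by BiLSTM sequence modeling (sketch)."""

    def __init__(self, hidden_size=256):
        super().__init__()
        resnet = models.resnet18(weights=None)
        # Keep the convolutional stages only; drop average pooling and the FC head.
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        self.rnn = nn.LSTM(512, hidden_size, bidirectional=True, batch_first=True)

    def forward(self, images):
        feats = self.backbone(images)      # (B, 512, H', W')
        feats = feats.mean(dim=2)          # collapse height -> (B, 512, W')
        feats = feats.permute(0, 2, 1)     # treat width as time -> (B, W', 512)
        outputs, _ = self.rnn(feats)       # (B, W', 2 * hidden_size)
        return outputs


class ContentAttention(nn.Module):
    """Bahdanau-style content-based attention over encoder outputs."""

    def __init__(self, enc_dim, dec_dim, attn_dim=128):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim)
        self.w_dec = nn.Linear(dec_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, dec_state, enc_outputs):
        # dec_state: (B, dec_dim); enc_outputs: (B, T, enc_dim)
        scores = self.v(torch.tanh(
            self.w_enc(enc_outputs) + self.w_dec(dec_state).unsqueeze(1)))
        alpha = torch.softmax(scores, dim=1)        # attention weights (B, T, 1)
        context = (alpha * enc_outputs).sum(dim=1)  # weighted context (B, enc_dim)
        return context, alpha


class Decoder(nn.Module):
    """One decoding step: previous character embedding plus attention context."""

    def __init__(self, num_classes, enc_dim=512, dec_dim=256):
        super().__init__()
        self.embed = nn.Embedding(num_classes, dec_dim)
        self.attn = ContentAttention(enc_dim, dec_dim)
        self.cell = nn.LSTMCell(dec_dim + enc_dim, dec_dim)
        self.out = nn.Linear(dec_dim, num_classes)

    def forward(self, prev_char, state, enc_outputs):
        h, c = state  # initialize to zeros at the first step
        context, _ = self.attn(h, enc_outputs)
        h, c = self.cell(torch.cat([self.embed(prev_char), context], dim=1), (h, c))
        return self.out(h), (h, c)  # per-character logits and updated state
```

At inference time the decoder would be unrolled one character at a time, feeding each predicted character back in until an end-of-sequence symbol is emitted; for transfer learning as described above, the encoder weights would be initialized from a model pre-trained on scene text images before fine-tuning on handwritten words.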