In contrast to Connectionist Temporal Classification (CTC) approaches, Sequence-To-Sequence (S2S) models for Handwritten Text Recognition (HTR) suffer from errors such as skipped or repeated words, which often occur at the end of a sequence. In this paper, to combine the best of both approaches, we propose to use the CTC-Prefix-Score during S2S decoding: during beam search, paths that are invalid according to the CTC confidence matrix are penalised. Our network architecture is composed of a Convolutional Neural Network (CNN) as visual backbone, bidirectional Long Short-Term Memory (LSTM) cells as encoder, and a Transformer with inserted mutual-attention layers as decoder. The CTC confidences are computed on the encoder output, while the Transformer is only used for character-wise S2S decoding. We evaluate this setup on three HTR data sets: IAM, Rimes, and StAZH. On IAM, we achieve a competitive Character Error Rate (CER) of 2.95% when pretraining our model on synthetic data and including a character-based language model for contemporary English. Compared to other state-of-the-art approaches, our model requires about 10-20 times fewer parameters. Our implementation is available on GitHub: https://github.com/Planet-AI-GmbH/tfaip-hybrid-ctc-s2s.
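To illustrate the core idea, the sketch below shows one common formulation of a CTC prefix score (the probability that the CTC output begins with a given label prefix), which can be interpolated with the Transformer's log-probabilities during beam search so that hypotheses the CTC head considers invalid are penalised. This is a simplified, hedged sketch rather than the paper's implementation: the array `ctc_probs`, the blank index, the helper names, and the interpolation weight `lam` are assumptions introduced for illustration, and the code works in probability space for brevity, whereas a practical implementation would use log space.

```python
# Minimal sketch of a CTC prefix score used to rescore S2S beam-search
# hypotheses. Assumptions: `ctc_probs` is a (T, C) matrix of per-frame
# softmax outputs from the encoder, index 0 is the CTC blank symbol, and
# plain probabilities are used instead of log-probabilities for clarity.
import numpy as np

BLANK = 0  # assumed index of the CTC blank symbol


def init_prefix(ctc_probs):
    """Forward variables of the empty prefix: only blanks emitted so far."""
    T = ctc_probs.shape[0]
    gamma_n = np.zeros(T)                        # prefix ends in non-blank
    gamma_b = np.cumprod(ctc_probs[:, BLANK])    # prefix ends in blank
    return gamma_n, gamma_b


def ctc_prefix_score(ctc_probs, prefix, cand, gamma_n, gamma_b):
    """Probability that the CTC output starts with `prefix + [cand]`.

    `gamma_n` / `gamma_b` are the per-frame forward variables of `prefix`
    (ending in a non-blank / blank frame). The updated variables are
    returned so the beam search can extend hypotheses incrementally.
    """
    T = ctc_probs.shape[0]
    new_n = np.zeros(T)
    new_b = np.zeros(T)

    # First frame: only an empty prefix can start with `cand` at t = 0.
    new_n[0] = ctc_probs[0, cand] if len(prefix) == 0 else 0.0
    psi = new_n[0]

    for t in range(1, T):
        # Mass of `prefix` that can be extended by `cand` at frame t:
        # anything ending in blank, plus non-blank endings if the last
        # label differs from `cand` (otherwise CTC would collapse them).
        phi = gamma_b[t - 1]
        if len(prefix) == 0 or prefix[-1] != cand:
            phi += gamma_n[t - 1]
        new_n[t] = (new_n[t - 1] + phi) * ctc_probs[t, cand]
        new_b[t] = (new_b[t - 1] + new_n[t - 1]) * ctc_probs[t, BLANK]
        psi += phi * ctc_probs[t, cand]

    return psi, new_n, new_b


def combined_score(log_p_s2s, log_p_ctc, lam=0.5):
    """Interpolate decoder and CTC prefix scores (`lam` is hypothetical)."""
    return lam * log_p_ctc + (1.0 - lam) * log_p_s2s
```

In a beam search built on this sketch, each hypothesis keeps its forward variables; when a character is appended, `ctc_prefix_score` yields the CTC-side evidence, and candidates whose prefix probability collapses toward zero (i.e. paths invalid under the CTC confidence matrix) receive a strong penalty in the combined score.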