Scene text recognition (STR) has a common practice: all state-of-the-art STR models are trained on large synthetic datasets. In contrast to this practice, training STR models only on fewer real labels (STR with fewer labels) is important when synthetic data is unavailable: for handwritten or artistic texts that are difficult to generate synthetically, and for languages other than English for which synthetic data does not always exist. However, the implicit common knowledge has been that training STR models on real data alone is nearly impossible because real data is insufficient. We consider that this common knowledge has obstructed the study of STR with fewer labels. In this work, we aim to reactivate STR with fewer labels by disproving that common knowledge. We consolidate recently accumulated public real datasets and show that STR models can be trained satisfactorily on real labeled data alone. Subsequently, we identify simple data augmentations that fully exploit the real data. Furthermore, we improve the models by collecting unlabeled data and introducing semi- and self-supervised methods. As a result, we obtain a model competitive with state-of-the-art methods. To the best of our knowledge, this is the first study that 1) shows sufficient performance using only real labels and 2) introduces semi- and self-supervised methods into STR with fewer labels. Our code and data are available at: https://github.com/ku21fan/STR-Fewer-Labels
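The "simple data augmentation" mentioned above can be illustrated with a minimal sketch. The specific operations (blur, rotation, crop) and the torchvision-based implementation below are illustrative assumptions for a typical STR input, not necessarily the authors' exact configuration; see the repository for their actual pipeline.

```python
# A minimal sketch of a simple augmentation pipeline for STR training images.
# The chosen operations and parameters are illustrative assumptions.
from torchvision import transforms

str_augment = transforms.Compose([
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=5)], p=0.5),
    transforms.RandomRotation(degrees=15),            # mild rotation; text stays readable
    transforms.RandomResizedCrop((32, 100),           # common STR input size (H=32, W=100)
                                 scale=(0.8, 1.0)),
    transforms.ToTensor(),
])
```

Mild, label-preserving perturbations like these increase the effective diversity of a small real dataset without distorting the text beyond legibility.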
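Pseudo-labeling is one standard way to realize the semi-supervised step the abstract refers to: a model trained on the labeled real data predicts labels for the collected unlabeled images, and only confident predictions are kept as extra training examples. The sketch below assumes a PyTorch model whose output is per-character class scores; the function name and confidence rule are hypothetical placeholders, not the paper's exact procedure.

```python
# A minimal pseudo-labeling sketch (semi-supervised learning), assuming the
# model outputs per-character logits of shape (batch, seq_len, num_classes).
import torch

@torch.no_grad()
def pseudo_label(model, unlabeled_loader, threshold=0.9):
    """Collect (image, predicted_sequence) pairs whose confidence exceeds threshold."""
    model.eval()
    pseudo_set = []
    for images in unlabeled_loader:
        probs = model(images).softmax(dim=-1)
        conf, preds = probs.max(dim=-1)          # per-character confidence and class
        seq_conf = conf.min(dim=-1).values       # weakest character decides the sequence
        for img, pred, c in zip(images, preds, seq_conf):
            if c.item() >= threshold:
                pseudo_set.append((img, pred))   # decode pred to text downstream
    return pseudo_set
```

The confident pseudo-labeled pairs are then mixed into the labeled set for further training; the confidence threshold trades label noise against the amount of extra data.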