Recently, Vision-Language Pre-training (VLP) techniques have greatly benefited various vision-language tasks by jointly learning visual and textual representations, which intuitively should also help Optical Character Recognition (OCR) tasks given the rich visual and textual information in scene text images. However, these methods cannot cope well with OCR tasks because of the difficulty in both instance-level text encoding and image-text pair acquisition (i.e., images and the texts captured within them). This paper presents a weakly supervised pre-training method, oCLIP, which acquires effective scene text representations by jointly learning and aligning visual and textual information. Our network consists of an image encoder and a character-aware text encoder that extract visual and textual features, respectively, as well as a visual-textual decoder that models the interaction between textual and visual features for learning effective scene text representations. With the learning of textual features, the pre-trained model can attend to texts in images well with character awareness. Besides, these designs enable learning from weakly annotated texts (i.e., partial texts in images without text bounding boxes), which greatly mitigates the data annotation constraint. Experiments over the weakly annotated images in ICDAR2019-LSVT show that our pre-trained model improves F-score by +2.5\% and +4.8\% when transferring its weights to other text detection and spotting networks, respectively. In addition, the proposed method outperforms existing pre-training techniques consistently across multiple public datasets (e.g., +3.2\% and +1.3\% for Total-Text and CTW1500).
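To make the three described components concrete (image encoder, character-aware text encoder, and visual-textual decoder), the following is a minimal PyTorch sketch of how such a pipeline could be wired together. It is illustrative only and is not the authors' released implementation: all module names, dimensions, layer counts, and the toy convolutional backbone are assumptions.

```python
# Minimal sketch of an oCLIP-style pipeline (assumed structure, not the official code).
import torch
import torch.nn as nn


class ImageEncoder(nn.Module):
    """Toy convolutional backbone standing in for the real image encoder."""

    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, dim, 3, stride=2, padding=1),
        )

    def forward(self, images):
        feats = self.net(images)                   # (batch, dim, h, w)
        return feats.flatten(2).transpose(1, 2)    # (batch, h*w, dim) token sequence


class CharAwareTextEncoder(nn.Module):
    """Encodes each annotated text instance character by character."""

    def __init__(self, num_chars=97, dim=256, max_len=25):
        super().__init__()
        self.char_embed = nn.Embedding(num_chars, dim)            # per-character embeddings
        self.pos_embed = nn.Parameter(torch.zeros(max_len, dim))  # character position embeddings
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, char_ids):
        # char_ids: (batch, num_texts, max_len) integer character indices
        b, t, l = char_ids.shape
        x = self.char_embed(char_ids.reshape(b * t, l)) + self.pos_embed[:l]
        x = self.encoder(x)                        # (b*t, max_len, dim)
        return x.mean(dim=1).view(b, t, -1)        # one feature per text instance


class VisualTextualDecoder(nn.Module):
    """Lets each text-instance feature attend over the image feature map."""

    def __init__(self, dim=256):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, text_feats, image_feats):
        # text_feats: (batch, num_texts, dim); image_feats: (batch, h*w, dim)
        out, _ = self.cross_attn(text_feats, image_feats, image_feats)
        return out


# Usage with random inputs: 2 images, 5 weakly annotated text instances each.
images = torch.randn(2, 3, 512, 512)
char_ids = torch.randint(0, 97, (2, 5, 25))
img_enc, txt_enc, dec = ImageEncoder(), CharAwareTextEncoder(), VisualTextualDecoder()
feats = dec(txt_enc(char_ids), img_enc(images))
print(feats.shape)  # torch.Size([2, 5, 256])
```

In this sketch the text branch only needs transcriptions of (some of) the texts in an image, not their bounding boxes, which mirrors the weak annotation setting described in the abstract; the pre-training objective itself is omitted.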